| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| sankniju/blockassist-bc-plump_carnivorous_chimpanzee_1755064064 | sankniju | 2025-08-13T05:49:06Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump carnivorous chimpanzee", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:48:54Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump carnivorous chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts | AmanPriyanshu | 2025-08-13T05:44:20Z | 0 | 0 | null | ["safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us"] | text-generation | 2025-08-13T05:44:01Z | (card below) |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- health-or-medicine
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Health Or Medicine GPT-OSS Model (8 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[Twitter/X](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[Twitter/X](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 8 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~6.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 8 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Health Or Medicine |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 8 experts per layer
- **No Retraining**: Direct removal without additional training steps
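For intuition, the selection step described above can be sketched in a few lines of Python. This is an illustrative outline only; the record format, layer count, and `keep` value are assumptions, not the exact pipeline used to build this model.
```python
# Illustrative sketch of activation-based expert selection (not the exact
# pipeline used for this model): count how often the router picks each
# expert on domain prompts, then keep the most frequently used ones per layer.
from collections import Counter

def select_experts(routing_records, num_layers=24, keep=8):
    """routing_records: iterable of (layer_idx, expert_idx) router decisions
    collected while running health/medicine evaluation prompts."""
    counts = [Counter() for _ in range(num_layers)]
    for layer_idx, expert_idx in routing_records:
        counts[layer_idx][expert_idx] += 1
    # For each layer, keep the `keep` experts with the highest activation counts
    return [
        sorted(expert for expert, _ in counts[layer].most_common(keep))
        for layer in range(num_layers)
    ]
```
The retained indices per layer would then be used to slice the corresponding expert weights and router logits.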
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 25.0% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to health or medicine tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
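As a concrete illustration of the speculative-decoding use case listed above: in principle the pruned checkpoint can act as the `assistant_model` in Transformers' assisted generation, with the full GPT-OSS-20B as the target. The snippet below is an untested sketch of that setup, not a benchmarked recommendation; both models share a tokenizer, which assisted generation requires.
```python
# Sketch: assisted generation with the pruned model as draft and the full
# GPT-OSS-20B as target. Illustrative only; not benchmarked for this pair.
from transformers import AutoModelForCausalLM, AutoTokenizer

target = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("What are the main functions of the human heart?", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```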
## Motivation & Expert Selection
This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 8 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 8 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on health or medicine tasks:
### Example 1
**Question:** What are the main functions of the human heart?
**Response:**
```
<|channel|>analysis<|message|>Need answer.<|end|><|start|>assistant<|channel|>final<|message|>The heart is the central organ that pumps blood and regulates the body’s circulation. Its main functions can be broken into two‑basic categories:
| 1. **Pacing and pumping** | The heart’s electrical system (the “pac‑node” and the “bundle of His”) sets the rhythm of heartbeat. The left‑ventular and right‑ventular chambers contract in a coordinated “systic” (s) and “diastic” (d) phase, pushing blood from the heart into the systemic circulation. | 2. **Regulating blood‑volume and pressure** | The heart’s valves (aortic, mitovral, trtr, etc.) keep the blood in the right‑ventular and left‑ventular chambers. The heart’s “cardiac output” (the amount of blood pumped) is regulated by the “cardiac preload” (the amount of blood in the left‑ventular chamber) and the “cardiac afterload” (the amount of blood pumped out). | 3. **Mainting oxygen and nutrient delivery** | The heart’s pumping ensures that oxygenated blood reaches to the tissues, and that nutrients (glucose, electrolys, etc.) are delivered to the tissues. | 4. **Hormal and neuro‑chemical regulation** | The heart’s releases hormones (like adrenaline, angangi‑t) and neuro‑hadrin‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑nead‑ne
```
---
### Example 2
**Question:** Explain the difference between bacteria and viruses.
**Response:**
```
<|channel|>analysis<|message|>Need explain difference between bacteria and viruses. Provide details: classification, structure, reproduction, disease, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viric‑s – the “living” vs. “non‑living” world**
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
```
---
### Example 3
**Question:** What are the symptoms and causes of diabetes?
**Response:**
```
<|channel|>analysis<|message|>Need to explain symptoms and causes.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes** is a chronic disease in the body’s “sins” that the blood can’t stay in the “sweet” sugar level.
It’s a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑s‑s) that the “s” (s‑s‑s) is a “s” (s‑
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
| IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755063764 | IvanJAjebu | 2025-08-13T05:44:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:43:37Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| tensorblock/YingdaLi_Llama-2-7b-hf-GGUF | tensorblock | 2025-08-13T05:43:22Z | 0 | 0 | null | ["gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "TensorBlock", "GGUF", "text-generation", "en", "base_model:YingdaLi/Llama-2-7b-hf", "base_model:quantized:YingdaLi/Llama-2-7b-hf", "region:us"] | text-generation | 2025-08-13T04:30:50Z | (card below) |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: This is a form to enable access to Llama 2 on Hugging Face
after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this form.
Requests will be processed in 1-2 days.
extra_gated_prompt: '**Your Hugging Face account email address MUST match the email
you provide on the Meta website, or your request will not be approved.**'
extra_gated_button_content: Submit
extra_gated_fields:
? I agree to share my name, email address and username with Meta and confirm that
I have already been granted download access on the Meta website
: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- TensorBlock
- GGUF
base_model: YingdaLi/Llama-2-7b-hf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## YingdaLi/Llama-2-7b-hf - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [YingdaLi/Llama-2-7b-hf](https://huggingface.co/YingdaLi/Llama-2-7b-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-2-7b-hf-Q2_K.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-2-7b-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Llama-2-7b-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Llama-2-7b-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Llama-2-7b-hf-Q4_0.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-2-7b-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Llama-2-7b-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Llama-2-7b-hf-Q5_0.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-2-7b-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Llama-2-7b-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Llama-2-7b-hf-Q6_K.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Llama-2-7b-hf-Q8_0.gguf](https://huggingface.co/tensorblock/YingdaLi_Llama-2-7b-hf-GGUF/blob/main/Llama-2-7b-hf-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/YingdaLi_Llama-2-7b-hf-GGUF --include "Llama-2-7b-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/YingdaLi_Llama-2-7b-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
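If you prefer Python over the CLI, the same download can be done with `huggingface_hub` (a small equivalent sketch; pick any filename from the table above):
```python
# Python alternative to the CLI: download a single GGUF file from this repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/YingdaLi_Llama-2-7b-hf-GGUF",
    filename="Llama-2-7b-hf-Q4_K_M.gguf",  # any filename from the table above
    local_dir="MY_LOCAL_DIR",
)
print(local_path)
```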
| RE-N-Y/WeTok-f8-c32 | RE-N-Y | 2025-08-13T05:41:11Z | 0 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-08-13T05:40:19Z | (card below) |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
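Since the repository only notes the PyTorchModelHubMixin integration (code and paper are not yet linked), here is a hedged sketch of how such a checkpoint is typically loaded: the real model class must come from the authors' code, and the `WeTok` class below is a hypothetical placeholder, not the actual implementation.
```python
# Hypothetical sketch of the PyTorchModelHubMixin loading pattern. The real
# model class (and its __init__ signature) must match the code used to push
# this checkpoint; `WeTok` here is only a placeholder to show the API shape.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class WeTok(nn.Module, PyTorchModelHubMixin):  # placeholder class
    def __init__(self, codebook_size: int = 32):
        super().__init__()
        self.codebook_size = codebook_size

# from_pretrained downloads config.json + weights and instantiates the class
model = WeTok.from_pretrained("RE-N-Y/WeTok-f8-c32")
```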
| llencia/blockassist-bc-wiry_wise_hedgehog_1755063586 | llencia | 2025-08-13T05:40:16Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:40:05Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Nofing/qwen3-4B-sft-all | Nofing | 2025-08-13T05:40:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-13T05:28:23Z | (card below) |
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nofing
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| llencia/blockassist-bc-wiry_wise_hedgehog_1755063372 | llencia | 2025-08-13T05:36:44Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:36:29Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AmanPriyanshu/gpt-oss-3.6b-specialized-health_or_medicine-pruned-moe-only-3-experts | AmanPriyanshu | 2025-08-13T05:35:41Z | 0 | 0 | null | ["safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us"] | text-generation | 2025-08-13T05:35:27Z | (card below) |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- health-or-medicine
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Health Or Medicine GPT-OSS Model (3 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[Twitter/X](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[Twitter/X](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 3 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~3.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 3 |
| **Layers** | 24 |
| **Top-k Routing** | 3 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Health Or Medicine |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 3 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 9.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to health or medicine tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 3 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 3 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-3.6b-specialized-health_or_medicine-pruned-moe-only-3-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.6b-specialized-health_or_medicine-pruned-moe-only-3-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-3.6b-specialized-health_or_medicine-pruned-moe-only-3-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.6b-specialized-health_or_medicine-pruned-moe-only-3-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on health or medicine tasks:
### Example 1
**Question:** What are the main functions of the human heart?
**Response:**
```
<|channel|>analysis<|message|>We need a question: "What are the main functions of the human heart?" presumably asking about functions of the heart? maybe "What are the main functions of the human heart?" or "What are the main functions of the human heart?" maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main functions of the human heart?"? maybe "What are the main
```
---
### Example 2
**Question:** Explain the difference between bacteria and viruses.
**Response:**
```
<|channel|>analysis<|message|>We need to explain difference between bacteria and viruses. The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference between bacteria and viruses? The difference
```
---
### Example 3
**Question:** What are the symptoms and causes of diabetes?
**Response:**
```
<|channel|>analysis<|message|>We need answer about symptoms and causes of diabetes? basically what are the symptoms and causes of diabetes? maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms and causes of diabetes?" maybe ask: "What are the symptoms
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
| saim1212/qwen2_2b_fullfinetuning_30epoch | saim1212 | 2025-08-13T05:35:41Z | 2 | 0 | peft | ["peft", "tensorboard", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:saim1212/qwen2_2b_fullfinetuningvisiononly_30epoch", "base_model:adapter:saim1212/qwen2_2b_fullfinetuningvisiononly_30epoch", "license:other", "region:us"] | null | 2025-02-24T20:20:49Z | (card below) |
---
library_name: peft
license: other
base_model: saim1212/qwen2_2b_fullfinetuningvisiononly_30epoch
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: qwen2vl_lora_16lr_7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2vl_lora_16lr_7b
This model is a fine-tuned version of [saim1212/qwen2_2b_fullfinetuningvisiononly_30epoch](https://huggingface.co/saim1212/qwen2_2b_fullfinetuningvisiononly_30epoch) on the talk2car dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
| IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755063241 | IvanJAjebu | 2025-08-13T05:35:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:34:58Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| llencia/blockassist-bc-wiry_wise_hedgehog_1755063162 | llencia | 2025-08-13T05:33:16Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:33:04Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| hariby/gpt-oss-20b-multilingual-reasoner | hariby | 2025-08-13T05:32:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "dataset:HuggingFaceH4/Multilingual-Thinking", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us"] | null | 2025-08-13T04:35:25Z | (card below) |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hariby/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
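For reference, a minimal sketch of what SFT with TRL on this dataset typically looks like is shown below. The actual hyperparameters used for this checkpoint were not published, so the values here are illustrative, not the exact training configuration.
```python
# Minimal TRL SFT sketch (illustrative values; not the exact training config).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # base model being fine-tuned
    train_dataset=dataset,
    args=SFTConfig(output_dir="gpt-oss-20b-multilingual-reasoner"),
)
trainer.train()
```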
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4.dev0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1755062066 | Sayemahsjn | 2025-08-13T05:32:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-13T05:32:31Z | (card below) |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AmanPriyanshu/gpt-oss-2.4b-specialized-health_or_medicine-pruned-moe-only-1-experts | AmanPriyanshu | 2025-08-13T05:32:34Z | 0 | 0 | null | ["safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us"] | text-generation | 2025-08-13T05:32:21Z | (card below) |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- health-or-medicine
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Health Or Medicine GPT-OSS Model (1 Expert)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[Twitter/X](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[Twitter/X](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 1 expert per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~2.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 1 |
| **Layers** | 24 |
| **Top-k Routing** | 1 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Health Or Medicine |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 1 expert per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 3.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to health or medicine tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 1 expert per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 1 expert while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-health_or_medicine-pruned-moe-only-1-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-health_or_medicine-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-health_or_medicine-pruned-moe-only-1-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-health_or_medicine-pruned-moe-only-8-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on health or medicine tasks:
### Example 1
**Question:** What are the main functions of the human heart?
**Response:**
```
1 true/5. 5 says 5, 5. "Answer" with a assistant type of. The main answer. "Channel" says "the correct answer" - "channel" says "the correct answer" - "channel" says "the correct answer" - "channel" says "the correct answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "main" says "the main answer" - "channel" says "the main answer" - "main" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says "the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel" says 'the main answer" - "channel
```
---
### Example 2
**Question:** Explain the difference between bacteria and viruses.
**Response:**
```
def( * "Channel Valid can be include the message. present. "User" ... "User" ... to the task of the reading by "User" ... # "User" ... "User" ... to the question. "User" ... ... "User" ... ... to the final point. "User" ... "User" ... ... ... to the final point. "User" ... ... "User" ... ... ... to the final point. "User" ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... "User" ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... "User" ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... "User" ... ... ... "User" ... ... ... "User" ... ... "User" ... ... ... "User" ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... ... "User" ... ... - "User" ... ... "User" ... ... "User" ... ... - "User" ... ... "User" ... ... "User" ... - "User" ... - "User" ... ... "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" ... - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User" - "User"
```
---
### Example 3
**Question:** What are the symptoms and causes of diabetes?
**Response:**
```
<|channel|>set- # you can answer. # user: channel channel channel channel channel channel channel or channel channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel channel channel channel channel channel channel channel channel channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel?"
...
... # ...
Answer: Yes, the content is a correct...
...
**: "channel"... "channel" has "channel"... "channel" or channel channel channel or channel channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel or channel" are correct. "User: "channel is a ... ... $... "channel is a ... ... ... ... ...
**... # answer: The content is a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct, but a correct
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
tsukadegi/blockassist-bc-hardy_amphibious_raven_1755062994
|
tsukadegi
| 2025-08-13T05:31:28Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hardy amphibious raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:31:18Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hardy amphibious raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
datasetsANDmodels/en2de
|
datasetsANDmodels
| 2025-08-13T05:31:08Z
| 0
| 0
| null |
[
"pytorch",
"marian",
"de",
"en",
"base_model:datasetsANDmodels/en2de",
"base_model:finetune:datasetsANDmodels/en2de",
"region:us"
] | null | 2025-08-13T05:24:35Z
|
---
language:
- de
- en
base_model:
- datasetsANDmodels/en2de
---
This model translates text from English to German.
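A minimal usage sketch, assuming the checkpoint works with the standard translation pipeline (the Marian tag above suggests it should):
```python
from transformers import pipeline

# Load the English-to-German checkpoint from this repo with the translation pipeline
translator = pipeline("translation", model="datasetsANDmodels/en2de")
print(translator("How are you today?")[0]["translation_text"])
```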
|
koloni/blockassist-bc-deadly_graceful_stingray_1755061486
|
koloni
| 2025-08-13T05:31:04Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:30:55Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-20.9b-specialized-math-pruned-moe-only-32-experts
|
AmanPriyanshu
| 2025-08-13T05:31:01Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:30:00Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (32 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 32 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~20.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 32 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
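To make the routing step concrete, here is a small illustrative sketch of top-k expert selection. The shapes and `top_k=4` mirror the architecture table above, but the function itself is a simplification, not the actual GPT-OSS routing code:
```python
import torch

# Illustrative top-k MoE routing: for each token, pick the 4 highest-scoring
# experts and normalize their mixing weights.
def route_tokens(router_logits: torch.Tensor, top_k: int = 4):
    # router_logits: (num_tokens, num_experts)
    weights, expert_ids = torch.topk(router_logits, k=top_k, dim=-1)
    weights = torch.softmax(weights, dim=-1)   # mixing weights for the chosen experts
    return weights, expert_ids

logits = torch.randn(5, 32)                    # 5 tokens, 32 experts in one layer
weights, expert_ids = route_tokens(logits)
print(expert_ids.shape)                        # torch.Size([5, 4])
```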
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: This variant retains all 32 of the original 32 experts per layer (no experts were removed)
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Memory Footprint**: 100.0% of original expert parameters (no reduction in this variant)
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
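The speculative-decoding use case above can be sketched with transformers' assisted generation, where a draft model proposes tokens that the full model verifies. The snippet below uses this card's checkpoint as the assistant purely for illustration; in practice a much more aggressively pruned variant would be the natural draft model, and draft/target compatibility is an assumption here:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Illustrative assisted-generation setup: the full GPT-OSS-20B verifies tokens
# proposed by a pruned draft model.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-20.9b-specialized-math-pruned-moe-only-32-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Compute 17 * 23 step by step.", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```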
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise; note that this particular variant retains all 32 of the 32 experts per layer, so its computational requirements match the original model.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: In this variant all 32 of the 32 experts are kept, so the router operates over the full expert set
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.9b-specialized-math-pruned-moe-only-32-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-math-pruned-moe-only-32-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.9b-specialized-math-pruned-moe-only-32-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-math-pruned-moe-only-32-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>We need to solve linear equation. Provide steps.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term that contains \(x\)
Subtract \(5\) from both sides of the equation to move the constant term to the right side.
\[
\begin{aligned}
2x + 5 - 5 &= 17 - 5 \\
2x &= 12
\end{aligned}
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\), which is \(2\).
\[
\begin{aligned}
\frac{2x}{2} &= \frac{12}{2} \\
x &= 6
\end{aligned}
\]
---
### Solution
\[
\boxed{x = 6}
\]
**Verification**
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, confirming that \(x = 6\) is indeed the correct solution.<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem and usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^{2}=a^{2}+b^{2}
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It Is Used
| Purpose | How the theorem helps | Example |
|---------|-----------------------|---------|
| **Finding a missing side** | Rearrange the formula to solve for the unknown side. | A ladder leans against a wall. The ladder is 10 ft long and the base is 6 ft from the wall. Find the height: \(h=\sqrt{10^{2}-6^{2}}=8\) ft. |
| **Checking right‑angle** | Verify that \(c^{2}=a^{2}+b^{2}\). | A triangle with sides 3, 4, 5 satisfies \(5^{2}=3^{2}+4^{2}\), so it is right‑angled. |
| **Distance between two points** | Use the distance formula, which is a direct application of the theorem in coordinate geometry. | Distance between \((x_1,y_1)\) and \((x_2,y_2)\): \(\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\). |
| **Engineering & construction** | Determine lengths of beams, angles, or structural elements that must form right angles. | Calculating the diagonal support needed in a rectangular frame. |
| **Navigation & surveying** | Compute straight‑line distances when only horizontal and vertical components are known. | A hiker moves 200 m east and 150 m north; straight‑line distance = \(\sqrt{200^2+150^2}\approx 250\) m. |
| **Computer graphics** | Compute pixel distances, collision detection, or shading calculations. | Finding the distance between two vertices in a 2D sprite. |
| **Physics** | Resolve vectors into perpendicular components or find resultant magnitudes. | A force of 10 N at 30° to the horizontal
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need area πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi \,\text{m}^{2}
\]
Using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \;\text{m}^{2}
\approx 153.938 \;\text{m}^{2}
\]
So the area of the circle is **\(49\pi\) square meters (about 154 m²)**.<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1755062886
|
kayacrypto
| 2025-08-13T05:30:30Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:29:47Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xGareeb/blockassist-bc-squeaky_noisy_gazelle_1755062900
|
0xGareeb
| 2025-08-13T05:30:20Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky noisy gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:29:35Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky noisy gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755061748
|
aleebaster
| 2025-08-13T05:30:09Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:30:00Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755062916
|
IvanJAjebu
| 2025-08-13T05:29:46Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:29:37Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-20.3b-specialized-math-pruned-moe-only-31-experts
|
AmanPriyanshu
| 2025-08-13T05:28:07Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:27:11Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (31 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 31 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~20.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 31 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 31 experts per layer
- **No Retraining**: Direct removal without additional training steps
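One way to picture the "direct removal" above is as masking the router so pruned experts can never be selected. A toy sketch (illustrative only, not the actual pruning code):
```python
import torch

# Restrict routing to a kept subset of experts by masking pruned logits with -inf.
# Physically deleting the expert weights achieves the same routing behaviour.
def route_with_kept_experts(router_logits, kept_expert_ids, top_k=4):
    mask = torch.full_like(router_logits, float("-inf"))
    mask[:, kept_expert_ids] = 0.0
    weights, expert_ids = torch.topk(router_logits + mask, k=top_k, dim=-1)
    return torch.softmax(weights, dim=-1), expert_ids

logits = torch.randn(3, 32)          # 3 tokens, 32 experts
kept = list(range(31))               # e.g. drop expert 31 in this layer
weights, expert_ids = route_with_kept_experts(logits, kept)
print(expert_ids)                    # indices are always drawn from `kept`
```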
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 96.9% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 31 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 31 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.3b-specialized-math-pruned-moe-only-31-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.3b-specialized-math-pruned-moe-only-31-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.3b-specialized-math-pruned-moe-only-31-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.3b-specialized-math-pruned-moe-only-31-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear equation.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term with \(x\)
Subtract \(5\) from both sides of the equation to get the variable term alone on the left:
\[
\begin{aligned}
2x + 5 - 5 &= 17 - 5 \\
2x &= 12
\end{aligned}
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\), which is \(2\):
\[
\begin{aligned}
\frac{2x}{2} &= \frac{12}{2} \\
x &= 6
\end{aligned}
\]
---
### Step 3: Verify the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem and usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^{2}=a^{2}+b^{2}
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It’s Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Finding a missing side** | Rearrange the formula to solve for the unknown side. | Triangle with legs 3 m and 4 m → \(c=\sqrt{3^{2}+4^{2}}=\sqrt{9+16}=5\) m. |
| **Checking if a triangle is right‑angled** | Verify if \(c^{2}=a^{2}+b^{2}\). | Triangle sides 5, 12, 13 → \(5^{2}+12^{2}=25+144=169=13^{2}\). |
| **Distance between two points** | Treat the coordinate differences as legs. | Points (2, 3) and (5, 11): Δx=3, Δy=8 → distance \(=\sqrt{3^{2}+8^{2}}=\sqrt{9+64}=√73\). |
| **Engineering & construction** | Design right‑angled structures, calculate beam lengths, etc. | Determining the length of a diagonal support in a rectangular frame. |
| **Computer graphics** | Compute pixel distances, collision detection, etc. | Finding the Euclidean distance between two vertices in a 2‑D or 3‑D space. |
| **Navigation & GPS** | Approximate straight‑line distances over small areas. | Estimating the shortest path between two geographic coordinates. |
| **Physics** | Resolve vectors into perpendicular components. | Decomposing a force into horizontal and vertical components. |
---
### Quick Tips
- **Always label the hypotenuse** as the side opposite the right angle; it’s the longest side.
- For **non‑right triangles**, use the Law of Cosines: \(c^{2}=a^{2}+b^{2}-2ab\cos
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need area πr^2 = π*49 = 49π ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^2
= 49\pi \,\text{m}^2
\]
If you want a numerical value using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \approx 153.938 \,\text{m}^2
\]
So the area is **\(49\pi\) square meters** (about **153.94 m²**).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
prl90777/qwen3_4_20250812_2132
|
prl90777
| 2025-08-13T05:25:55Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-4B",
"lora",
"transformers",
"base_model:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T01:44:07Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-4B
tags:
- base_model:adapter:Qwen/Qwen3-4B
- lora
- transformers
model-index:
- name: qwen3_4_20250812_2132
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3_4_20250812_2132
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3653
- Map@3: 0.9397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
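A hedged sketch of how a LoRA run with the hyperparameters above might be configured with `peft` and the Hugging Face `Trainer`; the LoRA rank/alpha and target modules are placeholders, since the card does not report them:
```python
from peft import LoraConfig
from transformers import TrainingArguments

# Placeholder LoRA settings; only the Trainer hyperparameters mirror the list above.
lora_cfg = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="qwen3_4_20250812_2132",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 x 8 = total train batch size 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8
)
# The adapter would then be attached with peft.get_peft_model(base_model, lora_cfg)
# and passed to Trainer together with the (unreported) training dataset.
print(lora_cfg)
```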
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map@3 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 23.3553 | 0.0598 | 20 | 2.0405 | 0.5869 |
| 14.0817 | 0.1196 | 40 | 1.1413 | 0.7757 |
| 7.9694 | 0.1794 | 60 | 0.8782 | 0.8333 |
| 6.5759 | 0.2392 | 80 | 0.7345 | 0.8571 |
| 5.4951 | 0.2990 | 100 | 0.6525 | 0.8714 |
| 4.9341 | 0.3587 | 120 | 0.6328 | 0.8735 |
| 4.3622 | 0.4185 | 140 | 0.5057 | 0.9034 |
| 3.8179 | 0.4783 | 160 | 0.5399 | 0.8955 |
| 4.2917 | 0.5381 | 180 | 0.4924 | 0.9077 |
| 4.0504 | 0.5979 | 200 | 0.4972 | 0.9057 |
| 3.6507 | 0.6577 | 220 | 0.4539 | 0.9115 |
| 3.9974 | 0.7175 | 240 | 0.4652 | 0.9110 |
| 3.7596 | 0.7773 | 260 | 0.4360 | 0.9201 |
| 3.3848 | 0.8371 | 280 | 0.4552 | 0.9123 |
| 3.6286 | 0.8969 | 300 | 0.4211 | 0.9188 |
| 3.5085 | 0.9567 | 320 | 0.4150 | 0.9206 |
| 3.1587 | 1.0149 | 340 | 0.3916 | 0.9217 |
| 2.5177 | 1.0747 | 360 | 0.4076 | 0.9225 |
| 2.3485 | 1.1345 | 380 | 0.3830 | 0.9269 |
| 3.0193 | 1.1943 | 400 | 0.4031 | 0.9215 |
| 2.2161 | 1.2541 | 420 | 0.3999 | 0.9281 |
| 2.5193 | 1.3139 | 440 | 0.3834 | 0.9279 |
| 2.5606 | 1.3737 | 460 | 0.3715 | 0.9297 |
| 2.3023 | 1.4335 | 480 | 0.3678 | 0.9298 |
| 2.2246 | 1.4933 | 500 | 0.3815 | 0.9307 |
| 2.2925 | 1.5531 | 520 | 0.3624 | 0.9299 |
| 2.1651 | 1.6129 | 540 | 0.3689 | 0.9319 |
| 2.2626 | 1.6726 | 560 | 0.3705 | 0.9290 |
| 2.2447 | 1.7324 | 580 | 0.3517 | 0.9330 |
| 2.4377 | 1.7922 | 600 | 0.3537 | 0.9333 |
| 2.5225 | 1.8520 | 620 | 0.3296 | 0.9363 |
| 2.048 | 1.9118 | 640 | 0.3334 | 0.9367 |
| 1.9818 | 1.9716 | 660 | 0.3481 | 0.9368 |
| 1.988 | 2.0299 | 680 | 0.3221 | 0.9397 |
| 1.3347 | 2.0897 | 700 | 0.3506 | 0.9387 |
| 1.1025 | 2.1495 | 720 | 0.3576 | 0.9392 |
| 1.2089 | 2.2093 | 740 | 0.3518 | 0.9397 |
| 1.2152 | 2.2691 | 760 | 0.3576 | 0.9404 |
| 1.1314 | 2.3288 | 780 | 0.3620 | 0.9382 |
| 1.1089 | 2.3886 | 800 | 0.3452 | 0.9397 |
| 1.0984 | 2.4484 | 820 | 0.3630 | 0.9396 |
| 1.0739 | 2.5082 | 840 | 0.3506 | 0.9406 |
| 0.8933 | 2.5680 | 860 | 0.3614 | 0.9405 |
| 0.8392 | 2.6278 | 880 | 0.3702 | 0.9399 |
| 1.2028 | 2.6876 | 900 | 0.3655 | 0.9401 |
| 0.8721 | 2.7474 | 920 | 0.3653 | 0.9397 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
AmanPriyanshu/gpt-oss-19.7b-specialized-math-pruned-moe-only-30-experts
|
AmanPriyanshu
| 2025-08-13T05:25:22Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:24:25Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (30 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 30 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~19.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 30 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 30 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 93.8% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
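The 93.8% figure above is simply the fraction of experts kept per layer; a quick illustrative check:
```python
# Fraction of expert parameters retained when 30 of 32 experts are kept per layer
kept, original = 30, 32
print(f"{kept / original:.1%}")  # 93.8%
```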
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 30 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 30 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-19.7b-specialized-math-pruned-moe-only-30-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.7b-specialized-math-pruned-moe-only-30-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-19.7b-specialized-math-pruned-moe-only-30-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.7b-specialized-math-pruned-moe-only-30-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term containing \(x\)
Subtract \(5\) from both sides of the equation to get the \(x\)-term alone on the left.
\[
\begin{aligned}
2x + 5 - 5 &= 17 - 5 \\
2x &= 12
\end{aligned}
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\), which is \(2\).
\[
\begin{aligned}
\frac{2x}{2} &= \frac{12}{2} \\
x &= 6
\end{aligned}
\]
---
### Step 3: Verify the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem and usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^{2}=a^{2}+b^{2}
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It’s Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Finding a missing side** | Rearrange the formula to solve for the unknown side. | Triangle with legs 3 m and 4 m → \(c=\sqrt{3^{2}+4^{2}}=\sqrt{9+16}=5\) m. |
| **Checking if a triangle is right‑angled** | Verify if \(c^{2}=a^{2}+b^{2}\). | Triangle sides 5, 12, 13 → \(5^{2}+12^{2}=25+144=169=13^{2}\). |
| **Distance between two points** | Treat the coordinate differences as legs of a right triangle. | Points (2, 3) and (5, 11): Δx=3, Δy=8 → distance \(=\sqrt{3^{2}+8^{2}}=\sqrt{9+64}= \sqrt{73}\). |
| **Engineering & construction** | Design right‑angled structures, calculate diagonal supports, etc. | Roof truss with side lengths 6 m and 8 m → diagonal \(=10\) m. |
| **Computer graphics** | Compute pixel distances, collision detection, etc. | Pixel at (x1, y1) to (x2, y2) → Euclidean distance. |
| **Navigation & surveying** | Determine straight‑line distances over a grid. | Surveyor measures 30 m east and 40 m north → straight‑line distance \(=50\) m. |
| **Physics** | Resolve vector components, calculate resultant forces. | Force components 3 N and 4 N → resultant \(=5\) N. |
---
### Quick “Rule of Three” Check
If the
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Compute area πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi \,\text{m}^{2}
\]
Using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \,\text{m}^{2}
\approx 153.938 \,\text{m}^{2}
\]
So the area of the circle is **\(49\pi\) square meters** (about **153.94 m²**).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755062539
|
IvanJAjebu
| 2025-08-13T05:23:34Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:23:20Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
blocksync/blockassist-bc-pouncing_bristly_finch_1755061165
|
blocksync
| 2025-08-13T05:23:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing bristly finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:22:52Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing bristly finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts
|
AmanPriyanshu
| 2025-08-13T05:22:34Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:21:39Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (29 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 29 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~19.1B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 29 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 29 experts per layer
- **No Retraining**: Direct removal without additional training steps
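For intuition, the selection step described above can be sketched in a few lines. This is an illustrative sketch only, assuming per-layer activation counts have already been collected; the array shapes and variable names are placeholders, not the project's actual code.

```python
import numpy as np

# Hypothetical input: activation_counts[layer, expert] = how often the router
# selected that expert on the target-domain evaluation prompts.
num_layers, num_experts, keep = 24, 32, 29
activation_counts = np.random.randint(0, 1000, size=(num_layers, num_experts))  # placeholder data

# Rank experts per layer by activation frequency and retain the top `keep`.
kept_experts = {
    layer: sorted(np.argsort(-activation_counts[layer])[:keep].tolist())
    for layer in range(num_layers)
}
print(kept_experts[0])  # the 29 expert indices retained in layer 0 (toy data)
```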
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 90.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 29 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 29 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
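As a rough illustration of what direct removal amounts to at the tensor level, the sketch below slices a generic stacked-expert tensor and the matching router rows. The layout assumed here (experts stacked along the leading dimension, a plain linear router) is for illustration only and may not match the actual GPT-OSS parameter layout.

```python
import torch

def prune_moe_layer(expert_weights: torch.Tensor,
                    router_weight: torch.Tensor,
                    keep_ids: list[int]):
    """Slice stacked expert tensors and router rows down to the retained experts.

    expert_weights: [num_experts, ...]    stacked expert parameters (assumed layout)
    router_weight:  [num_experts, hidden] router projection        (assumed layout)
    keep_ids:       sorted indices of the experts to keep
    """
    idx = torch.tensor(keep_ids)
    return (expert_weights.index_select(0, idx).clone(),
            router_weight.index_select(0, idx).clone())

# Toy example: 32 experts pruned down to 29 (dummy shapes, far smaller than the real model)
experts = torch.randn(32, 64, 64)
router = torch.randn(32, 64)
kept = list(range(3, 32))  # placeholder list of 29 retained expert indices
e29, r29 = prune_moe_layer(experts, router, kept)
print(e29.shape, r29.shape)  # torch.Size([29, 64, 64]) torch.Size([29, 64])
```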
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
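For reference, a minimal CUDA loading sketch with those two changes applied (generation then proceeds exactly as in the CPU example above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Same checkpoint as above, placed automatically on available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts",
    torch_dtype=torch.bfloat16,   # or torch.float16
    device_map="auto",            # uses CUDA if available, otherwise falls back to CPU
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "AmanPriyanshu/gpt-oss-19.1b-specialized-math-pruned-moe-only-29-experts"
)
# Build inputs and call model.generate() exactly as in the CPU example.
```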
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear.<|end|><|start|>assistant<|channel|>final<|message|>Sure! Let’s solve the equation step by step.
**Equation**
\[
2x + 5 = 17
\]
---
### 1. Isolate the term with \(x\)
We want the variable \(x\) by itself on one side.
Subtract 5 from both sides to cancel the constant on the left:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### 2. Solve for \(x\)
Now divide both sides by the coefficient of \(x\), which is 2:
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### 3. Check the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, formula, usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^{2}=a^{2}+b^{2}
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It’s Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Finding a missing side** | Rearrange the formula to solve for the unknown side. | Triangle with legs 3 m and 4 m → \(c=\sqrt{3^{2}+4^{2}}=\sqrt{9+16}=5\) m. |
| **Checking if a triangle is right‑angled** | Verify if \(c^{2}=a^{2}+b^{2}\). | Triangle sides 5, 12, 13 → \(5^{2}+12^{2}=25+144=169=13^{2}\). |
| **Distance between two points** | Treat the coordinate differences as legs of a right triangle. | Points (2, 3) and (5, 11): Δx=3, Δy=8 → distance \(=\sqrt{3^{2}+8^{2}}=\sqrt{9+64}= \sqrt{73}\). |
| **Engineering & construction** | Determine lengths of beams, angles, or clearance distances. | A roof truss with a 6 m horizontal span and 4 m rise → diagonal \(=\sqrt{6^{2}+4^{2}}=\sqrt{52}\approx7.21\) m. |
| **Computer graphics & geometry** | Compute Euclidean distances, collision detection, or mesh generation. | Pixel coordinates (x₁, y₁) and (x₂, y₂) → distance \(=\sqrt{(x₂-x₁)^{2}+(y₂-y₁)^{2}}\). |
| **Navigation & GPS** | Approximate straight‑line distances over small areas. | Two GPS points with latitude/longitude differences → convert to meters and apply the formula. |
| **Physics & trigon
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need area πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi \,\text{m}^{2}
\]
Using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \;\text{m}^{2}
\approx 153.938 \;\text{m}^{2}
\]
So the area of the circle is **\(49\pi\) square meters** (about **153.94 m²**).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
AmanPriyanshu/gpt-oss-18.5b-specialized-math-pruned-moe-only-28-experts
|
AmanPriyanshu
| 2025-08-13T05:19:52Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:19:00Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (28 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 28 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~18.5B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 28 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 28 experts per layer
- **No Retraining**: Direct removal without additional training steps
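A rough sketch of how such activation patterns could be collected with forward hooks is shown below. The module-matching rule and the router output structure are placeholders and would need to be adapted to the actual GPT-OSS implementation.

```python
import torch
from collections import Counter

# Hypothetical bookkeeping: count how often each expert index appears in the
# router's top-4 choices while evaluation prompts are run through the model.
expert_counts = {}  # maps a router module name -> Counter over expert indices

def make_router_hook(layer_name):
    def hook(module, inputs, output):
        # Assumption: the hooked module yields per-token router logits of shape
        # [tokens, num_experts]; the real GPT-OSS output structure may differ.
        logits = output if isinstance(output, torch.Tensor) else output[0]
        top4 = torch.topk(logits, k=4, dim=-1).indices.flatten().tolist()
        expert_counts.setdefault(layer_name, Counter()).update(top4)
    return hook

# Registration (placeholder matching rule; module names depend on the model code):
# for name, module in model.named_modules():
#     if name.endswith("router"):
#         module.register_forward_hook(make_router_hook(name))
# After running the evaluation prompts, expert_counts holds per-layer usage tallies.
```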
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 87.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
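As an illustration of the speculative-decoding use case above, the sketch below pairs this pruned variant as a draft model with the full GPT-OSS-20B via Hugging Face Transformers assisted generation. This is an untested sketch that assumes both checkpoints fit in memory and share the same tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-18.5b-specialized-math-pruned-moe-only-28-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Compute 17 * 23 step by step.", return_tensors="pt").to(target.device)
# The pruned model drafts candidate tokens; the full model verifies or rejects them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```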
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 28 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 28 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-math-pruned-moe-only-28-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-math-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-math-pruned-moe-only-28-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-math-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear equation.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term containing \(x\)
Subtract \(5\) from both sides of the equation to move the constant term to the right side.
\[
\begin{aligned}
2x + 5 - 5 &= 17 - 5 \\
2x &= 12
\end{aligned}
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\), which is \(2\).
\[
\begin{aligned}
\frac{2x}{2} &= \frac{12}{2} \\
x &= 6
\end{aligned}
\]
---
### Step 3: Verify the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, formula, usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^2 = a^2 + b^2
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It’s Used
| Situation | What you need | What you can find |
|-----------|---------------|-------------------|
| **Finding a missing side** | Two side lengths (one may be the hypotenuse) | The third side |
| **Checking if a triangle is right‑angled** | All three side lengths | Verify \(c^2 = a^2 + b^2\) |
| **Distance between two points in a plane** | Coordinates \((x_1,y_1)\) and \((x_2,y_2)\) | Distance \(d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}\) |
| **Geometry & trigonometry** | Side lengths or angles | Compute angles, areas, or other properties |
| **Engineering & construction** | Measurements of a right‑angled structure | Ensure correct dimensions, check for errors |
| **Computer graphics & robotics** | Coordinates of points | Compute Euclidean distances, collision detection |
---
### Quick Examples
1. **Missing hypotenuse**
\(a = 3\), \(b = 4\)
\(c = \sqrt{3^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5\)
2. **Missing leg**
\(c = 10\), \(a = 6\)
\(b = \sqrt{10^2 - 6^2} = \sqrt{100 - 36} = \sqrt{64} = 8\)
3. **Distance between points**
Points \((1,2)\) and \((4,6)\)
\(d = \sqrt{(4-1)^2 + (6-2)^2} = \sqrt{3^
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Compute area = π r^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2} = \pi \times 49\,\text{m}^{2}
\]
\[
A \approx 3.14159 \times 49 \;\text{m}^{2} \approx 153.94 \;\text{m}^{2}
\]
So the area of the circle is **about \(154\ \text{m}^2\)** (rounded to the nearest whole square meter).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
takubnaka/blockassist-bc-tropical_bipedal_worm_1755062266
|
takubnaka
| 2025-08-13T05:19:20Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tropical bipedal worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:19:05Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical bipedal worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-17.9b-specialized-math-pruned-moe-only-27-experts
|
AmanPriyanshu
| 2025-08-13T05:17:14Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:16:23Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (27 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 27 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~17.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 27 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 27 experts per layer
- **No Retraining**: Direct removal without additional training steps
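To make the top-k routing mentioned above concrete, here is a toy sketch of how a router sends each token to its 4 highest-scoring experts. The shapes, the softmax normalization, and the 27-expert count are illustrative only and do not reproduce the exact GPT-OSS routing code.

```python
import torch
import torch.nn.functional as F

def route_tokens(hidden: torch.Tensor, router_weight: torch.Tensor, top_k: int = 4):
    """Toy top-k router: each token is dispatched to its k highest-scoring experts.

    hidden:        [tokens, hidden_dim]      token representations
    router_weight: [num_experts, hidden_dim] router projection (illustrative shapes)
    """
    logits = hidden @ router_weight.T                 # [tokens, num_experts]
    weights, indices = torch.topk(logits, top_k, dim=-1)
    weights = F.softmax(weights, dim=-1)              # normalize over the chosen experts
    return indices, weights

tokens = torch.randn(5, 64)
router = torch.randn(27, 64)   # 27 experts remain available in this pruned variant
idx, w = route_tokens(tokens, router)
print(idx.shape, w.shape)      # torch.Size([5, 4]) torch.Size([5, 4])
```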
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 84.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 27 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 27 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.9b-specialized-math-pruned-moe-only-27-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.9b-specialized-math-pruned-moe-only-27-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.9b-specialized-math-pruned-moe-only-27-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.9b-specialized-math-pruned-moe-only-27-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear.<|end|><|start|>assistant<|channel|>final<|message|>Sure! Let’s solve the equation step by step.
**Equation**
\[
2x + 5 = 17
\]
---
### 1. Isolate the term with \(x\)
We want the expression \(2x\) by itself on one side.
Subtract \(5\) from both sides of the equation:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### 2. Solve for \(x\)
Now divide both sides by the coefficient of \(x\), which is \(2\):
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### 3. Check the solution
Plug \(x = 6\) back into the original equation to verify:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
**Answer**
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, formula, usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^2 = a^2 + b^2
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It Is Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Geometry** | Find a missing side of a right triangle | If \(a = 3\) m and \(b = 4\) m, then \(c = \sqrt{3^2+4^2}=5\) m. |
| **Construction & Engineering** | Verify right angles or measure distances | A carpenter checks that a corner is 90° by measuring the diagonal of a square cut‑out. |
| **Navigation & Surveying** | Compute straight‑line distances (great‑circle approximations, planar maps) | A surveyor uses the theorem to find the distance between two points when the east‑west and north‑south offsets are known. |
| **Computer Graphics** | Calculate Euclidean distances between points | In 2‑D or 3‑D space, the distance between \((x_1,y_1)\) and \((x_2,y_2)\) is \(\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\). |
| **Physics** | Resolve vectors into components | The magnitude of a velocity vector \(\vec{v} = (v_x, v_y)\) is \(|\vec{v}| = \sqrt{v_x^2+v_y^2}\). |
| **Computer Science** | Algorithms for nearest‑neighbor search, clustering, etc. | K‑means clustering uses Euclidean distance to assign points to centroids. |
| **Art & Design** | Create perspective drawings, calculate proportions | An artist uses the theorem to maintain correct proportions when drawing a 3‑D object on 2‑D paper. |
---
### Quick “Check‑It” Formula
If you know two sides, you can find the third:
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Compute area = π r^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2} = \pi \times 49\,\text{m}^{2}
\]
\[
A \approx 3.14159 \times 49 \;\text{m}^{2} \approx 153.94 \;\text{m}^{2}
\]
So the area of the circle is **about \(154\ \text{m}^2\)** (rounded to the nearest whole square meter).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
motza0025/blockassist-bc-tropical_lively_elk_1755060770
|
motza0025
| 2025-08-13T05:16:12Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tropical lively elk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:15:50Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical lively elk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Maikobi/domain-name-generator
|
Maikobi
| 2025-08-13T05:15:58Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T05:11:25Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
datanikah9/mbart-penerjemah-gorontalo-quran
|
datanikah9
| 2025-08-13T05:15:43Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:datanikah9/mbart-penerjemah-gorontalo",
"base_model:finetune:datanikah9/mbart-penerjemah-gorontalo",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T03:52:00Z
|
---
library_name: transformers
base_model: datanikah9/mbart-penerjemah-gorontalo
tags:
- generated_from_trainer
model-index:
- name: mbart-penerjemah-gorontalo-quran
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-penerjemah-gorontalo-quran
This model is a fine-tuned version of [datanikah9/mbart-penerjemah-gorontalo](https://huggingface.co/datanikah9/mbart-penerjemah-gorontalo) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
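No usage instructions are provided; as an assumption, the checkpoint should load through the standard Hugging Face Transformers seq2seq API roughly as sketched below (the expected source/target languages and any required language codes are not documented here).

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "datanikah9/mbart-penerjemah-gorontalo-quran"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # a source sentence (the expected source language is not documented here)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```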
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
AmanPriyanshu/gpt-oss-17.3b-specialized-math-pruned-moe-only-26-experts
|
AmanPriyanshu
| 2025-08-13T05:14:44Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:13:54Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (26 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 26 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~17.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 26 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 26 experts per layer
- **No Retraining**: Direct removal without additional training steps
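One way to experiment with an expert subset before physically removing any weights is to mask the router logits of the dropped experts so top-k routing never selects them. The sketch below is a toy illustration with placeholder shapes, not the method used to produce this checkpoint.

```python
import torch

def mask_router_logits(logits: torch.Tensor, keep_ids: list[int]) -> torch.Tensor:
    """Send the logits of dropped experts to -inf so top-k routing never picks them.

    logits:   [tokens, num_experts] raw router scores (illustrative shapes)
    keep_ids: indices of the experts that remain available
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[:, keep_ids] = 0.0
    return logits + mask

logits = torch.randn(5, 32)                   # 32 experts in the unpruned model
kept = list(range(6, 32))                     # placeholder: any 26 retained indices
masked = mask_router_logits(logits, kept)
print(torch.topk(masked, 4, dim=-1).indices)  # routing now stays within the kept set
```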
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 81.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 26 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 26 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.3b-specialized-math-pruned-moe-only-26-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-math-pruned-moe-only-26-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.3b-specialized-math-pruned-moe-only-26-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-math-pruned-moe-only-26-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
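Putting those two changes together, a complete GPU example might look like the following (same prompt and generation settings as the CPU snippet above; this is a sketch and has not been benchmarked on specific hardware):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AmanPriyanshu/gpt-oss-17.3b-specialized-math-pruned-moe-only-26-experts"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # places the model on GPU(s) when available
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens
input_length = inputs["input_ids"].shape[1]
print(tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True))
```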
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear.<|end|><|start|>assistant<|channel|>final<|message|>Sure! Let’s solve the equation step by step.
**Equation**
\[
2x + 5 = 17
\]
---
### 1. Isolate the term with \(x\)
We want the \(2x\) term by itself on one side.
Subtract \(5\) from both sides:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### 2. Solve for \(x\)
Now divide both sides by the coefficient of \(x\), which is \(2\):
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### 3. Check the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem and usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
a^2 + b^2 = c^2
\]
where
- \(a\) and \(b\) are the legs (the two sides that form the right angle),
- \(c\) is the hypotenuse.
---
### How It Is Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Geometry** | Find a missing side of a right triangle | If \(a = 3\) m and \(b = 4\) m, then \(c = \sqrt{3^2+4^2}=5\) m. |
| **Construction & Engineering** | Verify that a corner is a right angle or design a right‑angled frame | A carpenter checks that a wall corner is 90° by measuring the diagonal. |
| **Navigation & Surveying** | Compute distances when you know two legs (e.g., north‑south and east‑west offsets) | A hiker walks 200 m north and 150 m east; the straight‑line distance to the start is \(\sqrt{200^2+150^2}\approx 250\) m. |
| **Computer Graphics** | Calculate pixel distances, collision detection, or shading | The distance between two points \((x_1,y_1)\) and \((x_2,y_2)\) is \(\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\). |
| **Physics** | Resolve vectors into perpendicular components or find resultant magnitudes | The resultant force of two perpendicular forces \(F_x\) and \(F_y\) is \(F = \sqrt{F_x^2+F_y^2}\). |
| **Education** | Teach algebra, geometry, and problem‑solving skills | Students solve “find the missing side” problems or prove the theorem using algebraic manipulation. |
---
### Quick Tips
1. **Check for a right angle first** – the theorem only applies to right triangles.
2. **Use the formula in reverse** – if you know
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Compute area πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
\approx 3.14159 \times 49\,\text{m}^{2}
\approx 153.938\,\text{m}^{2}
\]
So the area of the circle is **about \(154\ \text{m}^2\)** (rounded to the nearest whole square meter).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
0xGareeb/blockassist-bc-squeaky_noisy_gazelle_1755061944
|
0xGareeb
| 2025-08-13T05:14:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky noisy gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:13:43Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky noisy gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-16.7b-specialized-math-pruned-moe-only-25-experts
|
AmanPriyanshu
| 2025-08-13T05:12:17Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:11:30Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (25 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 25 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~16.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 25 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
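As a quick illustration of the routing step described above, the toy sketch below scores one token against 32 experts and keeps the top 4, mirroring this model family's top-k routing. The dimensions match the architecture table, but the randomly initialized router is purely illustrative:
```python
import torch
import torch.nn.functional as F

# Toy router, illustrative only: 2880-dim residual stream, 32 experts, top-4 routing.
hidden = torch.randn(1, 2880)        # one token's hidden state
router = torch.nn.Linear(2880, 32)   # randomly initialized router projection

logits = router(hidden)                                # per-expert scores
weights, expert_ids = torch.topk(F.softmax(logits, dim=-1), k=4)
print(expert_ids.tolist(), weights.tolist())           # which experts fire, with what weight
```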
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 25 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 78.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for the full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
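For the speculative decoding use case listed above, a minimal sketch using Hugging Face assisted generation is shown below. It assumes both checkpoints fit in memory on the same device and share the GPT-OSS tokenizer; the prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Illustrative only: assisted generation requires both models on the same device
# and a shared tokenizer (both checkpoints are GPT-OSS based).
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-16.7b-specialized-math-pruned-moe-only-25-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
inputs = {k: v.to(target.device) for k, v in inputs.items()}

# The pruned draft model proposes tokens; the full model verifies them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```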
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated on math tasks, we created this specialized model, which maintains domain expertise while cutting each layer from 32 to 25 experts and reducing computational requirements accordingly.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 25 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.7b-specialized-math-pruned-moe-only-25-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-math-pruned-moe-only-25-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.7b-specialized-math-pruned-moe-only-25-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-math-pruned-moe-only-25-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear equation.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term containing \(x\)
Subtract \(5\) from both sides of the equation to get the \(x\)-term alone on the left:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\), which is \(2\):
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### Check the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left‑hand side equals the right‑hand side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem and usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
c^{2}=a^{2}+b^{2}
\]
where
- \(c\) = length of the hypotenuse,
- \(a\) and \(b\) = lengths of the two legs.
---
### How It Is Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Geometry** | Find a missing side of a right triangle | If \(a=3\) m, \(b=4\) m, then \(c=\sqrt{3^{2}+4^{2}}=\sqrt{25}=5\) m. |
| **Engineering** | Design right‑angled structures (e.g., ladders, roofs) | Determine the minimum ladder length needed to reach a height of 10 m on a 6 m wall: \(c=\sqrt{10^{2}+6^{2}}\approx 11.66\) m. |
| **Navigation & Surveying** | Compute straight‑line distances (great‑circle distance approximations) | Use the theorem on a planar map to estimate the shortest path between two points. |
| **Computer Graphics** | Calculate pixel distances, collision detection | Distance between two points \((x_1,y_1)\) and \((x_2,y_2)\): \(\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\). |
| **Physics** | Resolve vectors into perpendicular components | If a force has components \(F_x\) and \(F_y\), its magnitude is \(\sqrt{F_x^2+F_y^2}\). |
| **Trigonometry** | Derive relationships between sine, cosine, and tangent | From \(c^2=a^2+b^2\) and \(a=c\cos\theta\), \(b=c\sin\theta\), we get \(\sin^2\theta+\cos^2\theta=1\). |
| **Computer Science** | Euclidean distance in higher‑dimensional spaces | For vectors \(\mathbf{u}\) and \
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Compute area πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(r = 7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi \,\text{m}^{2}
\]
Using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \;\text{m}^{2}
\approx 153.938 \;\text{m}^{2}
\]
So the area of the circle is **\(49\pi\) square meters**, which is approximately **\(154\) m²** (rounded to the nearest whole number).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755061848
|
IvanJAjebu
| 2025-08-13T05:12:01Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:11:46Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF
|
Novaciano
| 2025-08-13T05:09:45Z
| 0
| 0
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:IlyaGusev/gemma-2-2b-it-abliterated",
"base_model:quantized:IlyaGusev/gemma-2-2b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-13T05:09:28Z
|
---
license: gemma
language:
- en
tags:
- llama-cpp
- gguf-my-repo
base_model: IlyaGusev/gemma-2-2b-it-abliterated
---
# Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`IlyaGusev/gemma-2-2b-it-abliterated`](https://huggingface.co/IlyaGusev/gemma-2-2b-it-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/gemma-2-2b-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF --hf-file gemma-2-2b-it-abliterated-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF --hf-file gemma-2-2b-it-abliterated-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF --hf-file gemma-2-2b-it-abliterated-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF --hf-file gemma-2-2b-it-abliterated-q5_k_m-imat.gguf -c 2048
```
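Beyond the CLI and server binaries, the same GGUF file can be loaded from Python with the `llama-cpp-python` bindings. A minimal sketch (context size and sampling settings are illustrative):
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

llm = Llama.from_pretrained(
    repo_id="Novaciano/gemma-2-2b-it-abliterated-Q5_K_M-GGUF",
    filename="gemma-2-2b-it-abliterated-q5_k_m-imat.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```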
|
acidjp/blockassist-bc-pesty_extinct_prawn_1755061271
|
acidjp
| 2025-08-13T05:08:00Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:07:20Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-15.5b-specialized-math-pruned-moe-only-23-experts
|
AmanPriyanshu
| 2025-08-13T05:07:21Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:06:35Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (23 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 23 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~15.5B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 23 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 23 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 71.9% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated on math tasks, we created this specialized model, which maintains domain expertise while cutting each layer from 32 to 23 experts and reducing computational requirements accordingly.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 23 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-15.5b-specialized-math-pruned-moe-only-23-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-15.5b-specialized-math-pruned-moe-only-23-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-15.5b-specialized-math-pruned-moe-only-23-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-15.5b-specialized-math-pruned-moe-only-23-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Just solve linear.<|end|><|start|>assistant<|channel|>final<|message|>Sure! Let’s solve the equation step by step.
**Equation**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term with \(x\)
We want all terms containing \(x\) on one side and the constants on the other side.
Subtract \(5\) from both sides:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### Step 2: Solve for \(x\)
Now divide both sides by the coefficient of \(x\), which is \(2\):
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### Step 3: Check the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left side equals the right side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem and usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
a^{2} + b^{2} = c^{2}
\]
where
- \(a\) and \(b\) are the legs (the two sides that form the right angle),
- \(c\) is the hypotenuse.
---
### How It Is Used
| Context | What you do | Example |
|---------|-------------|---------|
| **Geometry** | Find a missing side of a right triangle | If \(a=3\) m, \(b=4\) m, then \(c=\sqrt{3^{2}+4^{2}}=\sqrt{25}=5\) m. |
| **Measurement** | Verify if a triangle is right‑angled | Measure sides \(5\), \(12\), \(13\). Since \(5^{2}+12^{2}=13^{2}\), the triangle is right‑angled. |
| **Engineering & Architecture** | Design right‑angled structures, calculate diagonal lengths, or check structural integrity. | A roof truss with legs 6 m and 8 m has a diagonal of 10 m. |
| **Navigation & Surveying** | Compute straight‑line distances (great‑circle approximations, planar maps). | Find the straight distance between two points on a map when you know the east‑west and north‑south separations. |
| **Computer Graphics & Animation** | Determine distances, normalize vectors, or compute lighting angles. | The distance between two pixels \((x_1,y_1)\) and \((x_2,y_2)\) is \(\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\). |
| **Physics** | Resolve forces, calculate velocities, or analyze motion in perpendicular directions. | The resultant speed of an object moving 3 m/s east and 4 m/s north is \(5\) m/s. |
| **Education** | Introduce algebraic reasoning, proof techniques, and problem‑solving skills. | Proving that the area of a square built on the hypotenuse equals
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Compute area πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\ \text{m})^{2}
= \pi \times 49\ \text{m}^{2}
= 49\pi\ \text{m}^{2}
\]
Using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \ \text{m}^{2}
\approx 153.938\ \text{m}^{2}
\]
So the circle’s area is **\(49\pi\) square meters**, or about **\(154\ \text{m}^{2}\)** when rounded to the nearest whole number.<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
atk-25/HW3-scaled-training
|
atk-25
| 2025-08-13T05:05:30Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T04:39:35Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
askrohitsen/blockassist-bc-hulking_squinting_porpoise_1755061356
|
askrohitsen
| 2025-08-13T05:04:07Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking squinting porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:03:49Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking squinting porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755060256
|
Sayemahsjn
| 2025-08-13T05:02:45Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:02:37Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoan17/LOe100s50
|
hoan17
| 2025-08-13T05:02:30Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T04:57:56Z
|
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
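Since the repository is tagged as a `StableDiffusionPipeline`, loading it with the generic diffusers loader should work roughly as follows (untested sketch; the prompt and dtype are illustrative):
```python
from diffusers import DiffusionPipeline
import torch

# Untested sketch: the generic loader resolves the pipeline class declared
# in this repository's model_index.json.
pipe = DiffusionPipeline.from_pretrained("hoan17/LOe100s50", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # or "cpu"

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("sample.png")
```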
|
AmanPriyanshu/gpt-oss-14.3b-specialized-math-pruned-moe-only-21-experts
|
AmanPriyanshu
| 2025-08-13T05:02:29Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T05:01:47Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (21 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 21 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~14.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 21 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 21 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 65.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated on math tasks, we created this specialized model, which maintains domain expertise while cutting each layer from 32 to 21 experts and reducing computational requirements accordingly.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 21 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.3b-specialized-math-pruned-moe-only-21-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-math-pruned-moe-only-21-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.3b-specialized-math-pruned-moe-only-21-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-math-pruned-moe-only-21-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>We need to solve 2x+5=17. Steps: subtract 5, divide by 2. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Step 1 – Write the equation**
\[
2x + 5 = 17
\]
**Step 2 – Isolate the term that contains \(x\)**
Subtract \(5\) from both sides so that the \(x\)-term is alone on the left side:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
**Step 3 – Solve for \(x\)**
Divide both sides by the coefficient of \(x\), which is \(2\):
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
**Answer:** \(x = 6\)
(You can check by plugging back in: \(2(6) + 5 = 12 + 5 = 17\), which matches the right‑hand side.)<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, formula, usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
a^{2} + b^{2} = c^{2}
\]
where
* \(a\) and \(b\) are the legs (the two sides that form the right angle),
* \(c\) is the hypotenuse.
---
## How It Is Used
| Field | Typical Application | Example |
|-------|---------------------|---------|
| **Geometry & Trigonometry** | Finding missing side lengths, proving similarity, deriving sine/cosine relationships | Given a right triangle with legs 3 m and 4 m, find the hypotenuse: \(c = \sqrt{3^{2}+4^{2}} = 5\) m. |
| **Surveying & Construction** | Determining distances, angles, and elevations on a site; ensuring right‑angle corners | A builder measures two perpendicular walls of a room; the diagonal length is calculated to verify the room is square. |
| **Navigation & Cartography** | Computing straight‑line distances (great‑circle approximations), triangulation | A sailor uses a known baseline and two bearings to locate a point on a map. |
| **Physics & Engineering** | Resolving forces, velocity components, and displacement in orthogonal directions | The resultant force of two perpendicular forces \(F_x\) and \(F_y\) is \(F = \sqrt{F_x^{2}+F_y^{2}}\). |
| **Computer Graphics & Animation** | Calculating pixel distances, ray‑casting, and collision detection | A program checks if a point lies inside a right‑angled triangle by comparing squared distances. |
| **Education & Problem‑Solving** | Teaching algebraic manipulation, proof techniques, and real‑world applications | Students prove that a 5‑4‑3 triangle is a Pythagorean triple. |
---
### Quick Steps to Apply the Theorem
1. **Identify the right angle** – the side opposite this angle is the hypotenuse.
2. **Label the legs** \(a\) and \(b\).
3. **Plug the known lengths into \(a^{2}+b
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need area πr^2. r=7 => 49π. Provide numeric approx.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For \(r = 7\) m:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi \,\text{m}^{2}
\]
If you want a decimal approximation, using \(\pi \approx 3.14159\):
\[
A \approx 49 \times 3.14159 \approx 153.938 \,\text{m}^{2}
\]
So the circle’s area is **\(49\pi\) square meters** (about **153.94 m²**).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
NanEi/sealion-v3-burmese-fine-tuned-adapter-v3
|
NanEi
| 2025-08-13T05:01:45Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T05:00:52Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koloni/blockassist-bc-deadly_graceful_stingray_1755059689
|
koloni
| 2025-08-13T05:01:31Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:01:25Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755059484
|
aleebaster
| 2025-08-13T05:00:48Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:00:41Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ND911/Chroma1_HD_and_HD_Annealed_ggufs
|
ND911
| 2025-08-13T04:59:40Z
| 0
| 0
| null |
[
"gguf",
"base_model:lodestones/Chroma1-HD",
"base_model:quantized:lodestones/Chroma1-HD",
"region:us"
] | null | 2025-08-13T01:46:42Z
|
---
base_model:
- lodestones/Chroma1-HD
---
gguf files for [Chroma1-HD](https://huggingface.co/lodestones/Chroma1-HD)
I do not know what the difference is with the Annealed version, but of the two I prefer it. All GGUFs were created with the latest llama.cpp tooling (gguf/convert scripts) and patches as of today.


|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755061082
|
IvanJAjebu
| 2025-08-13T04:59:12Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:58:58Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1755060698
|
acidjp
| 2025-08-13T04:58:31Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:57:48Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-13.1b-specialized-math-pruned-moe-only-19-experts
|
AmanPriyanshu
| 2025-08-13T04:58:03Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:57:21Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (19 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~13.1B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 19 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 19 experts per layer
- **No Retraining**: Direct removal without additional training steps
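To make the selection step concrete, the snippet below is a purely illustrative sketch (not the actual pipeline behind this release): given a hypothetical per-layer matrix of router activation counts, it keeps the 19 most frequently activated experts in each layer.

```python
import numpy as np

# Hypothetical activation statistics: one row per layer, one column per expert,
# counting how often the router selected each expert on domain-specific prompts.
num_layers, num_experts, keep = 24, 32, 19
rng = np.random.default_rng(0)
activation_counts = rng.integers(0, 1000, size=(num_layers, num_experts))

# For every layer, retain the `keep` most frequently activated experts.
kept_experts = [
    sorted(np.argsort(layer_counts)[::-1][:keep].tolist())
    for layer_counts in activation_counts
]

for layer_idx, experts in enumerate(kept_experts[:2]):
    print(f"layer {layer_idx}: keep experts {experts}")
```

The retained indices would then determine which expert weights are copied into the pruned checkpoint and how the router outputs are re-indexed.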
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 59.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 19 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 19 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.1b-specialized-math-pruned-moe-only-19-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-math-pruned-moe-only-19-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.1b-specialized-math-pruned-moe-only-19-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-math-pruned-moe-only-19-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>We need to solve linear equation. Provide step-by-step.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term containing \(x\)
Subtract \(5\) from both sides of the equation to get the \(x\)-term alone on the left side.
\[
\begin{aligned}
2x + 5 &= 17 \\
2x + 5 - 5 &= 17 - 5 \\
2x &= 12
\end{aligned}
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\), which is \(2\).
\[
\begin{aligned}
\frac{2x}{2} &= \frac{12}{2} \\
x &= 6
\end{aligned}
\]
---
### Step 3: Verify the solution
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left side equals the right side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, formula, usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) equals the sum of the squares of the lengths of the other two sides.
Mathematically:
\[
a^{2} + b^{2} = c^{2}
\]
where
* \(a\) and \(b\) are the legs (the two sides that form the right angle),
* \(c\) is the hypotenuse.
---
## How it’s used
| **Scenario** | **What you need** | **How the theorem helps** |
|--------------|-------------------|---------------------------|
| **Finding a missing side** | One leg and the hypotenuse, or both legs | Rearrange and solve the equation: \(c = \sqrt{a^{2}+b^{2}}\) or \(a = \sqrt{c^{2}-b^{2}}\) |
| **Checking if a triangle is right‑angled** | All three side lengths | Verify if \(a^{2}+b^{2}=c^{2}\) (within rounding error) |
| **Geometry & construction** | Designing right‑angled shapes, ladders, bridges | Ensures structural angles are 90° |
| **Trigonometry** | Relating sine, cosine, and tangent | \( \sin^{2}\theta + \cos^{2}\theta = 1\) comes from the theorem |
| **Distance in coordinate geometry** | Distance between two points \((x_1,y_1)\) and \((x_2,y_2)\) | \(d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}\) |
| **Physics** | Projectile motion, velocity vectors, etc. | Decompose vectors into perpendicular components |
---
### Quick Examples
1. **Right triangle with legs 3 m and 4 m**
\(c = \sqrt{3^{2}+4^{2}} = \sqrt{9+16} = \sqrt{25} = 5\) m.
2. **Triangle with hypotenuse 10 m and one leg 6 m**
\(b = \sqrt{10^{2}-6^{2}} = \sqrt{100-36} =
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need area = πr^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area \(A\) of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) m:
\[
A = \pi (7\ \text{m})^{2}
= \pi \times 49\ \text{m}^{2}
\approx 3.14159 \times 49\ \text{m}^{2}
\approx 153.938\ \text{m}^{2}
\]
So the circle has an area of about **\(154\ \text{m}^{2}\)** (rounded to the nearest square meter).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
iamzac/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_vicious_mink
|
iamzac
| 2025-08-13T04:57:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am grazing_vicious_mink",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T04:44:04Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am grazing_vicious_mink
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kelvi23/settlement-stress-flagger-v1
|
kelvi23
| 2025-08-13T04:55:49Z
| 0
| 0
|
lightgbm
|
[
"lightgbm",
"finance",
"settlement-fails",
"license:apache-2.0",
"region:us"
] | null | 2025-07-16T22:53:52Z
|
---
license: apache-2.0
tags:
- finance
- settlement-fails
model_type: lightgbm
library_name: lightgbm
---
## Settlement “Stress” Flagging with LightGBM
**Objective**
Quickly flag days where a given CUSIP’s settlement fails are in the top‑10% of historic fail values, so ops can investigate and remediate before T+1.
**Data & Features**
- **Raw inputs**: daily “fails‐to‐deliver” count (`QUANTITY (FAILS)`) and price
- **Engineered signals** (all lagged or historical, no leakage):
- 1‑day lags: `qty_lag1`, `price_lag1`, `fail_value_lag1`
- Rolling stats per‑CUSIP: 7‑day mean/std of quantity, 30‑day mean/std of fail value
- Momentum: `qty_pct_change`, `price_pct_change`
- Cumulative counts: days since last fail, # of days with any fail, cum qty
- Event timing: `day_of_week`, `is_month_end`, `is_quarter_end`, `is_year_end`
- Text flags: `is_foreign`, `is_adr`, `is_etf`, `is_reit`
- Heavy‑tail transforms: `log_qty`, `log_val`, extreme spikes
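As a rough illustration of how the lagged and rolling signals above could be built, here is a minimal pandas sketch with hypothetical column names (`cusip`, `date`, `qty`, `price`) and toy data, not the exact production pipeline:

```python
import pandas as pd

# Hypothetical daily fails data: one row per CUSIP per settlement date.
df = pd.DataFrame({
    "cusip": ["CUSIP0001"] * 5,
    "date": pd.date_range("2024-01-02", periods=5, freq="B"),
    "qty": [100, 250, 80, 400, 50],
    "price": [99.5, 99.7, 99.6, 100.1, 100.0],
}).sort_values(["cusip", "date"])
g = df.groupby("cusip")

# 1-day lags (strictly historical, so no leakage into the label).
df["qty_lag1"] = g["qty"].shift(1)
df["price_lag1"] = g["price"].shift(1)
df["fail_value_lag1"] = (df["qty"] * df["price"]).groupby(df["cusip"]).shift(1)

# Rolling per-CUSIP statistics computed on lagged values.
df["qty_roll7_mean"] = g["qty"].transform(lambda s: s.shift(1).rolling(7, min_periods=1).mean())

# Momentum and calendar flags.
df["qty_pct_change"] = g["qty"].pct_change()
df["day_of_week"] = df["date"].dt.dayofweek
df["is_month_end"] = df["date"].dt.is_month_end
print(df.tail())
```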
**Model**
- **Algorithm**: LightGBM Classifier (handles missing values out‑of‑the‑box, extremely fast)
- **Training**
- Split by date: train = all data before `2025‑01‑01`, test = after
- Positive class = fail_value > 90th percentile (train)
- Early‑stop on AUC & binary_error on the hold‑out
- Best iteration: ~20 boosting rounds
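The training recipe above can be approximated with the short script below; the synthetic data, feature shapes, and hyperparameters are placeholders used only to show the shape of the procedure (date-based split, 90th-percentile label, early stopping on AUC/binary error):

```python
import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the engineered features and raw fail values;
# in practice these come from the feature pipeline and a pre/post-2025 date split.
X_train, X_test = rng.normal(size=(5000, 10)), rng.normal(size=(1000, 10))
fail_value_train = rng.lognormal(mean=10, sigma=2, size=5000)
fail_value_test = rng.lognormal(mean=10, sigma=2, size=1000)

# Positive class: fail value above the 90th percentile of the *training* period.
threshold = np.percentile(fail_value_train, 90)
y_train = (fail_value_train > threshold).astype(int)
y_test = (fail_value_test > threshold).astype(int)

clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.1)
clf.fit(
    X_train, y_train,
    eval_set=[(X_test, y_test)],
    eval_metric=["auc", "binary_error"],
    callbacks=[lgb.early_stopping(stopping_rounds=50)],
)
proba = clf.predict_proba(X_test)[:, 1]
print("flagged days:", int((proba > 0.5).sum()))
```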
**Performance on Test Set**
- **Threshold** (train 90th pctile of `fail_value`): 445 122.29
- **ROC‑AUC**: 1.000
- **Precision**: 0.99
- **Recall**: 1.00
- **F1‑Score**: 1.00
<img src="confusion_matrix.png" alt="Confusion matrix – test set" width="35%">
*Figure 1 – Confusion matrix on the 2025-test slice.*
| Class | True Neg | False Pos | False Neg | True Pos |
|-------|---------:|----------:|----------:|---------:|
| Count | 279 712 | 226 | 49 | 30 991 |
**Top Features (gain)**
| Feature | Importance |
|---------------------|-----------:|
| `price_pct_change` | 142 |
| `price_lag1` | 129 |
| `log_qty` | 115 |
| `qty_pct_change` | 83 |
| `fail_value_lag1` | 48 |
| `log_val` | 40 |
| (…plus smaller contributions…) | |
**Next Steps**
1. **Calibrate** probability threshold for ops SLAs.
2. **Monitor** drift in AUC/precision‐recall over time.
### Quick start
```python
import joblib
model = joblib.load("lgb_settlement_stress_flag.pkl")
proba = model.predict_proba(X)[:, 1] # P(stress)
flag = proba > 0.5
```
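For the threshold-calibration step mentioned under “Next Steps”, one illustrative approach (with synthetic scores and labels standing in for the real hold-out) is to pick the lowest threshold that still meets a target precision SLA:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
# Hypothetical hold-out labels and model scores; replace with the real test slice.
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1000), 0, 1)

# Lowest threshold whose precision meets an example 95% target.
precision, recall, thresholds = precision_recall_curve(y_true, scores)
mask = precision[:-1] >= 0.95
operating_threshold = thresholds[mask][0] if mask.any() else 0.5
print(f"operating threshold: {operating_threshold:.3f}")
```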
## Citation
> Musodza, K. (2025). Bond Settlement Automated Exception Handling and Reconciliation. Zenodo. https://doi.org/10.5281/zenodo.16828730
>
> ➡️ Technical white-paper & notebooks: https://github.com/Coreledger-tech/Exception-handling-reconciliation.git
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1755060845
|
JunHotate
| 2025-08-13T04:55:14Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:55:01Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kelvi23/DistilBERT-Reconciler
|
kelvi23
| 2025-08-13T04:54:09Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"finance",
"reconciliation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-07-16T19:15:42Z
|
---
license: apache-2.0
language:
- en
library_name: transformers
model_type: distilbert
tags:
- finance
- reconciliation
- distilbert
---
# DistilBERT-Reconciler (v1)
Fine-tuned **DistilBERT** on 3.2 M labelled *post-trade break* descriptions +
resolution actions (ISO 20022 & proprietary logs).
| split | accuracy | micro-F1 | macro-F1 |
|-------|----------|----------|----------|
| hold-out (20 %) | **0.88** | **0.88** | **0.85** |

*Figure 1 – DistilBERT-Reconciler: end-to-end training & inference pipeline, showing fine-tuning loop (dashed) and production-time text-to-root-cause flow.*
## Intended use
Automated classification of reconciliation exceptions in fixed-income
settlement workflows (CUSIP/ISIN). Produces a `label_id`, which is then mapped to a human-readable
root cause & recommended next action. *Not for retail investment advice.*
## Training details
* **Base** : `distilbert-base-uncased`
* **Epochs** : 4 • lr = 3e-5 • max_len = 256
* **Hardware** : 2× A100 40 GB
* **Loss curve & confusion matrix** : see `/training_artifacts/`.
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tok = AutoTokenizer.from_pretrained("kelvi23/DistilBERT-Reconciler")
mdl = AutoModelForSequenceClassification.from_pretrained("kelvi23/DistilBERT-Reconciler")
text = "COAF: partial collateral received awaiting tri-party"
inputs = tok(text, return_tensors="pt")
pred = mdl(**inputs).logits.argmax(-1).item()
```
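Continuing from the snippet above, the predicted index can be turned into the human-readable root cause via the standard `id2label` mapping in the model config (falling back to a generic label if the checkpoint does not ship one):

```python
# Map the predicted class index back to a root-cause string.
root_cause = mdl.config.id2label.get(pred, f"LABEL_{pred}")
print(root_cause)
```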
## Limitations & bias
Labels derived from North-American corporate-bond desks (2019–2025). May
under-perform on equities or non-USD/CAD repos without re-training.
## Citation
> Musodza, K. (2025). Bond Settlement Automated Exception Handling and Reconciliation. Zenodo. https://doi.org/10.5281/zenodo.16828730
>
> ➡️ Technical white-paper & notebooks: https://github.com/Coreledger-tech/Exception-handling-reconciliation.git
|
lautan/blockassist-bc-gentle_patterned_goat_1755059269
|
lautan
| 2025-08-13T04:52:44Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:52:40Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
siriusata/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-eager_soft_antelope
|
siriusata
| 2025-08-13T04:50:32Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am eager_soft_antelope",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T04:42:11Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am eager_soft_antelope
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vcmt794/DBT_Large_AdamW25_stage_1
|
vcmt794
| 2025-08-13T04:49:44Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T04:48:52Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
datasetsANDmodels/id2en
|
datasetsANDmodels
| 2025-08-13T04:49:03Z
| 0
| 0
| null |
[
"pytorch",
"marian",
"en",
"id",
"base_model:datasetsANDmodels/id2en",
"base_model:finetune:datasetsANDmodels/id2en",
"region:us"
] | null | 2025-08-13T04:41:51Z
|
---
language:
- en
- id
base_model:
- datasetsANDmodels/id2en
---
This model translates text from Indonesian to English.
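A minimal usage sketch, assuming the standard 🤗 Transformers seq2seq interface for Marian checkpoints (illustrative, not an official example from the author):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("datasetsANDmodels/id2en")
model = AutoModelForSeq2SeqLM.from_pretrained("datasetsANDmodels/id2en")

# Translate one Indonesian sentence into English.
inputs = tok("Selamat pagi, apa kabar?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
```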
|
shenzhentianyi/unsloth_Qwen3-4B-Instruct-2507_adaptor_checkpoint-9895
|
shenzhentianyi
| 2025-08-13T04:47:27Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"region:us"
] |
text-generation
| 2025-08-13T04:38:07Z
|
---
base_model: unsloth/Qwen3-4B-Instruct-2507
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen3-4B-Instruct-2507
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
crazywriter1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_prehistoric_shark
|
crazywriter1
| 2025-08-13T04:47:20Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am slender_prehistoric_shark",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-06T00:10:14Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am slender_prehistoric_shark
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755058658
|
milliarderdol
| 2025-08-13T04:46:49Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:46:01Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ayomidexcii/Qwen3-0.6B-Gensyn-Swarm-tricky_yawning_snake
|
Ayomidexcii
| 2025-08-13T04:46:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tricky_yawning_snake",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T04:44:49Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tricky_yawning_snake
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmanPriyanshu/gpt-oss-9.6b-specialized-math-pruned-moe-only-13-experts
|
AmanPriyanshu
| 2025-08-13T04:45:37Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:45:06Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (13 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 13 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 13 experts per layer
- **No Retraining**: Direct removal without additional training steps
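As a purely illustrative sketch of the data-driven selection step above, the ranking can be thought of as counting how often each expert appears in the router's top-k choices and keeping the most frequent ones per layer. The record format and numbers below are hypothetical and do not reflect the actual analysis code:
```python
from collections import Counter
# Hypothetical router traces: which experts the router selected (top-4) for one token of a math prompt
router_traces = [
    {"layer": 0, "selected_experts": [3, 7, 12, 30]},
    {"layer": 0, "selected_experts": [3, 12, 19, 21]},
    {"layer": 0, "selected_experts": [7, 12, 21, 30]},
    # ... many more tokens across the evaluation benchmarks
]
def top_experts_per_layer(traces, num_layers, keep):
    """Rank experts by activation frequency and keep the `keep` most frequent per layer."""
    kept = {}
    for layer in range(num_layers):
        counts = Counter(
            expert
            for trace in traces
            if trace["layer"] == layer
            for expert in trace["selected_experts"]
        )
        kept[layer] = sorted(expert for expert, _ in counts.most_common(keep))
    return kept
# With the toy data above, keep the 3 most frequently routed experts of layer 0
print(top_experts_per_layer(router_traces, num_layers=1, keep=3))
```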
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 40.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
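(The 40.6% figure follows directly from the layer-wise reduction: 13 / 32 ≈ 0.406 of the expert weights are kept. Non-expert parameters such as attention and embeddings are untouched, which is why the total parameter count shrinks from ~21B to ~9.6B rather than by the same fraction.)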
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-math-pruned-moe-only-13-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-math-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-math-pruned-moe-only-13-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-math-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Need to solve linear equation. Provide steps.<|end|><|start|>assistant<|channel|>final<|message|>**Step‑by‑step solution**
We are given the equation
\[
2x + 5 = 17
\]
and we want to find the value of \(x\).
---
### 1. **Isolate the variable term**
The variable \(x\) appears in the first term on the left side.
To get \(x\) alone, we must remove the constant \(+5\) from the left side.
Subtract \(5\) from both sides:
\[
2x + 5 - 5 = 17 - 5
\]
Simplify:
\[
2x = 12
\]
---
### 2. **Solve for \(x\)**
Now \(x\) is multiplied by \(2\).
Divide both sides by \(2\):
\[
\frac{2x}{2} = \frac{12}{2}
\]
Simplify:
\[
x = 6
\]
---
### 3. **Check the solution**
Plug \(x = 6\) back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
The left side equals the right side, so the solution is correct.
---
\[
\boxed{x = 6}
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, formula, usage.<|end|><|start|>assistant<|channel|>final<|message|>**The Pythagorean Theorem**
> In a right‑angled triangle, the square of the length of the hyp‑ (the longest side) equals the sum of the squares of the other two sides.
> **Formula**:
> \[
> a^{2}+b^{2}=c^{2}
> \]
> where \(a\) and \(b\) are the lengths of the legs, and \(c\) is the length of the hyp‑.
---
### How It Is Used
| Field | Practical Use |
|-------|--------------|
| **Geometry & Trometry** | Finding side lengths in a right‑angled triangle. |
| **Engineering** | Calculating dimensions of a bridge, road, or building. |
| **Physics** | Deriving the relationship between velocity, acceleration, and time (e.g., in a 2‑D space). |
| **Computer Graphics** | Rendering 3‑D shapes, calculating distances. |
| **Mat Education** | Teaching basic geometry, problem‑solving. |
| **Everyday Life** | Measuring a piece of a right‑angled object (e.g., a piece of a table). |
---
### Quick Example
Supp you have a right‑angled triangle with legs 3 m and 4 m.
Using the theorem:
\[
3^{2}+4^{2}=9+16=25
\]
So the hyp‑ is \(\sqrt{25}=5\) m.
---
**Key Take‑in**:
The theorem lets us *solve for any side* of a right‑angled triangle, and it’s the foundation of many geometry, physics, and engineering calculations.<|return|>
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need area = π r^2 = π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area of a circle is given by
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) meters:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
\approx 3.14159 \times 49\,\text{m}^{2}
\approx 153.94\,\text{m}^{2}
\]
So the area of a circle with a radius of 7 m is about **\(1.54 \times 10^{2}\) m²** (≈ 153.9 m²).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
forouzanfallah/sentinel_test3_fft_colorloss_caption
|
forouzanfallah
| 2025-08-13T04:45:18Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"diffusers-training",
"sd3",
"sd3-diffusers",
"controlnet",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-13T02:11:01Z
|
---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
inference: true
tags:
- text-to-image
- diffusers-training
- diffusers
- sd3
- sd3-diffusers
- controlnet
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 controlnet-forouzanfallah/sentinel_test3_fft_colorloss_caption
These are ControlNet weights trained on stabilityai/stable-diffusion-3-medium-diffusers with a new type of conditioning.
The weights were trained using [ControlNet](https://github.com/lllyasviel/ControlNet) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sd3.md).
You can find some example images below.
prompt: a high-resolution satellite image, sharp details, clear view from space

Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
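Until the official snippet is filled in, here is a minimal sketch of running these weights with diffusers (assuming the standard `SD3ControlNetModel` / `StableDiffusion3ControlNetPipeline` classes; the conditioning-image path is a placeholder):
```python
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline
from diffusers.utils import load_image
# Load the ControlNet weights from this repository and attach them to the SD3 base model
controlnet = SD3ControlNetModel.from_pretrained(
    "forouzanfallah/sentinel_test3_fft_colorloss_caption", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
# Conditioning image (placeholder path) and the prompt used for the validation image above
control_image = load_image("conditioning_example.png")
image = pipe(
    prompt="a high-resolution satellite image, sharp details, clear view from space",
    control_image=control_image,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```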
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
ggmancer/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_shy_ant
|
ggmancer
| 2025-08-13T04:44:47Z
| 2
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am robust_shy_ant",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T06:04:07Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am robust_shy_ant
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
warlockmage/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-docile_tawny_tapir
|
warlockmage
| 2025-08-13T04:44:39Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am docile_tawny_tapir",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-27T23:47:38Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am docile_tawny_tapir
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmanPriyanshu/gpt-oss-9.0b-specialized-math-pruned-moe-only-12-experts
|
AmanPriyanshu
| 2025-08-13T04:43:55Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:43:28Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (12 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 12 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 12 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 12 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 37.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
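For the speculative-decoding use case, one possible setup is the transformers assisted-generation API, with this pruned model drafting tokens and the full GPT-OSS-20B verifying them (a sketch only; hardware requirements and generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Full model verifies, pruned model drafts (both share the GPT-OSS tokenizer)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-9.0b-specialized-math-pruned-moe-only-12-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
inputs = tokenizer("Compute 17 * 23 step by step.", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```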
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 12 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 12 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.0b-specialized-math-pruned-moe-only-12-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.0b-specialized-math-pruned-moe-only-12-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.0b-specialized-math-pruned-moe-only-12-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.0b-specialized-math-pruned-moe-only-12-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>Need to solve equation. Provide steps.<|end|><|start|>assistant<|channel|>final<|message|>**Solve the equation**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term containing the variable
Subtract \(5\) from both sides of the equation to get the variable by itself.
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### Step 2: Divide by the coefficient of the variable
The coefficient of \(x\) is \(2\). Divide both sides by \(2\).
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### Result
\[
\boxed{x = 6}
\]
**Check (optional)**
Plug back into the original equation:
\[
2(6) + 5 = 12 + 5 = 17
\]
It matches the right‑hand side, so the solution is correct.<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pyth‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th‑th
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>Need to compute area = π r^2. r=7 => area=π*49 ≈ 153.938.<|end|><|start|>assistant<|channel|>final<|message|>The area of a circle is given by the formula
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) meters:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
\approx 3.14159 \times 49\,\text{m}^{2}
\approx 153.94\,\text{m}^{2}
\]
So the area of a circle with a radius of 7 m is about **\(1.54 \times 10^{2}\) m²** (≈ 154 m²).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755060044
|
IvanJAjebu
| 2025-08-13T04:42:09Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:41:39Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kasunRathnayaka/mistral-lora-instruct-4bit-model
|
kasunRathnayaka
| 2025-08-13T04:41:53Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T04:41:41Z
|
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kasunRathnayaka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
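A minimal inference sketch, assuming the uploaded weights load as a standard 🤗 transformers checkpoint with the Mistral-Instruct chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "kasunRathnayaka/mistral-lora-instruct-4bit-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```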
|
datasetsANDmodels/en2id
|
datasetsANDmodels
| 2025-08-13T04:41:20Z
| 0
| 0
| null |
[
"pytorch",
"marian",
"en",
"id",
"base_model:datasetsANDmodels/en2id",
"base_model:finetune:datasetsANDmodels/en2id",
"region:us"
] | null | 2025-08-13T04:34:34Z
|
---
language:
- en
- id
base_model:
- datasetsANDmodels/en2id
---
This model translates text from English to Indonesian.
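A minimal usage sketch with the Marian classes (assuming the checkpoint follows the standard MarianMT layout):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "datasetsANDmodels/en2id"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Translate an English sentence to Indonesian
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```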
|
Barrrrry/Mixtral-8x7B-v0.1-W8A16
|
Barrrrry
| 2025-08-13T04:41:00Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T04:41:00Z
|
---
license: apache-2.0
---
|
AmanPriyanshu/gpt-oss-7.8b-specialized-math-pruned-moe-only-10-experts
|
AmanPriyanshu
| 2025-08-13T04:40:22Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:39:57Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (10 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 10 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~7.8B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 10 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 10 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 31.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
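For the fine-tuning use case, a minimal PEFT/LoRA sketch on top of this checkpoint (the `target_modules` names are assumptions and should be verified against the actual module names of the GPT-OSS architecture):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
import torch
model_id = "AmanPriyanshu/gpt-oss-7.8b-specialized-math-pruned-moe-only-10-experts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
# Attach LoRA adapters to the attention projections only (module names assumed, not verified)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```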
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 10 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 10 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.8b-specialized-math-pruned-moe-only-10-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.8b-specialized-math-pruned-moe-only-10-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.8b-specialized-math-pruned-moe-only-10-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.8b-specialized-math-pruned-moe-only-10-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>We need to solve equation 2x+5=17. Provide steps.<|end|><|start|>assistant<|channel|>final<|message|>**Step 1: Identify the equation**
\[
2x + 5 = 17
\]
**Step 2: Sub isolate the variable term**
Subtract 5 from both sides to isolate the \(2x\):
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
**Step 3: Solve for \(x\)**
Divide both sides by 2:
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
**Answer**
\[
x = 6
\]<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pyth‑theorem (Pyth‑law)**
> In a triangle with two sides \(a\) and \(b\) that are perpendicular (i.e., the triangle is a right‑angled shape), the length of the hyp‑segment (the side opposite to the right‑angle) is \(c\).
> The theorem states that
\[
a^{2}+b^{2}=c^{2}
\]
> or, in words, “the square of the sum of the two perpendicular sides equals the square of the hyp‑segment.”
---
### How it is used
| **Geometry** | **Physics** | **Engineering** | **Matology** |
|---|---|---|---|
| **Finding a right‑angle** – If you know two sides of a triangle, you can find the hyp‑segment by solving \(c=\sqrt{a^{2}+b^{2}}\). | **Calculating a distance** – In a plane or a ship, the distance between two points can be found by using the same formula. | **Designing a bridge** – The length of the bridge can be calculated from the two perpendicular parts of the bridge. | **Pro solving a problem** – The same formula can be used to solve a problem that involves a right‑angle. |
---
### Quick‑step example
1. **Given**: \(a=3\) and \(b=4\).
2. **Compute**: \(c=\sqrt{3^{2}+4^{2}}=\sqrt{9+16}=\sqrt{25}=5\).
3. **Result**: The hyp‑segment is \(5\).
---
**Key**: The theorem is a simple relationship between the sides of a right‑angled triangle. It is used in many fields to find distances, angles, or to design structures.<|return|>
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>We need area = π r^2. r=7 => area=π*49 ≈ 153.9. Provide exact formula.<|end|><|start|>assistant<|channel|>final<|message|>The area (A) of a circle is given by the formula:
\[
A = \pi r^{2}
\]
where \( r \) is the radius.
For a radius of \( r = 7 \) meters:
\[
A = \pi (7 \text{ m})^{2}
= \pi \times 49 \text{ m}^{2}
\approx 153.9 \text{ m}^{2}
\]
So the area of a circle with a radius of 7 meters is approximately **153.9 m²** (exactly \(49\pi\) m²).<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
AmanPriyanshu/gpt-oss-7.2b-specialized-math-pruned-moe-only-9-experts
|
AmanPriyanshu
| 2025-08-13T04:38:51Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:38:29Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (9 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 9 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~7.2B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 9 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts (a minimal sketch follows below)
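The following is a minimal, hypothetical sketch of what steps 2 and 3 look like for a single MoE layer. The function and tensor names are illustrative assumptions for this card, not the actual pruning code used to produce the released checkpoints.

```python
# Illustrative sketch of pruning one MoE layer (hypothetical names, not the
# authors' actual pruning script).
import torch

def prune_moe_layer(router_weight: torch.Tensor,
                    expert_weights: list[torch.Tensor],
                    keep_experts: list[int]):
    """Keep only the selected experts and the matching router rows.

    router_weight:  [num_experts, hidden_dim] projection producing one routing
                    logit per expert for each token.
    expert_weights: per-expert parameter tensors, one entry per expert.
    keep_experts:   indices of the experts retained for the target domain.
    """
    # Drop router rows of removed experts so logits are only produced for
    # experts that still exist (this preserves router functionality).
    pruned_router = router_weight[keep_experts, :].clone()
    # Drop the corresponding expert parameters.
    pruned_experts = [expert_weights[i] for i in keep_experts]
    return pruned_router, pruned_experts

# Example: reduce a 32-expert layer to 9 retained experts.
hidden_dim, num_experts = 2880, 32
router = torch.randn(num_experts, hidden_dim)
experts = [torch.randn(hidden_dim, hidden_dim) for _ in range(num_experts)]
kept = [0, 3, 5, 7, 11, 14, 20, 26, 31]  # hypothetical expert indices
new_router, new_experts = prune_moe_layer(router, experts, kept)
assert new_router.shape[0] == len(new_experts) == 9
```

After pruning, the router still applies the same top-k selection, just over the smaller set of retained experts.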
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 9 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 28.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 9 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks (see the sketch below)
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 9 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
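As a rough illustration of the activation-analysis step, the sketch below counts how often each expert appears in the router's top-k selections and keeps the most frequently used ones. It is schematic only; the released models were derived from the activation statistics in the dataset above rather than from this code.

```python
# Schematic sketch of ranking experts by router activation frequency
# (hypothetical helper names; the real analysis uses the published dataset).
from collections import Counter
import torch

def rank_experts(router_logit_batches: list[torch.Tensor],
                 top_k: int = 4, keep_n: int = 9) -> list[int]:
    """router_logit_batches: tensors of shape [num_tokens, num_experts]."""
    counts: Counter = Counter()
    for logits in router_logit_batches:
        # Which experts the router would pick for each token.
        chosen = torch.topk(logits, k=top_k, dim=-1).indices
        counts.update(chosen.flatten().tolist())
    # Keep the experts activated most often on domain prompts.
    return [expert for expert, _ in counts.most_common(keep_n)]

# Example with random logits for a 32-expert layer.
fake_batches = [torch.randn(128, 32) for _ in range(10)]
print(rank_experts(fake_batches, top_k=4, keep_n=9))
```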
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-math-pruned-moe-only-9-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-math-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-math-pruned-moe-only-9-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-math-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>We need to solve equation 2x + 5 = 17. Show work step by step. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Step 1: Identify the equation**
\[
2x + 5 = 17
\]
**Step 2: Sub isolate the constant term (5) from the right side**
Subtract 5 from both sides:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
**Step 3: Solve for the variable \(x\)**
Divide both sides by 2:
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
**Answer**: \(x = 6\)<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>Need explain theorem, usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pyth‑theorem**:
- **Statement**: For a triangle with sides *a*, *b*, *c* (c is the hyp‑length),
\(\displaystyle a^{2}+b^{2}=c^{2}\).
- **Proof**: Use the area‑formula for a triangle:
\(\displaystyle \frac{1}{2}ab\sin\theta = \frac{1}{2}c^{2}\).
From the right‑angle property \(\sin\theta = \frac{a}{c}\), we get \(a^{2}+b^{2}=c^{2}\).
**Applications**:
- **Geometry**: In any triangle, the sum of the squares of the two sides equals the square of the hyp‑length.
- **Construction**: For a given side lengths, you can find the hyp‑length.
- **Engineering**: In a bridge or a road, the length of the bridge can be found by using the Pyth‑theorem.
- **Mat‑analysis**: In a problem, you can use the Pyth‑theorem to find the values of the sides.
**Use**:
- **When you have a triangle**: If you know the two sides, you can find the hyp‑length.
- **When you want to keep a square**: Use the Pyth‑theorem to keep the square.
- **When you want to keep a square**: Use the Pyth‑theorem to keep the square.
**Key**: The Pyth‑theorem is a simple relation between the sides of a triangle. It is used in geometry, engineering, and mathematics.<|return|>
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>We need to compute area of circle with radius 7 meters. Area = π r^2. So π * 49 = 49π. In meters^2. Provide numeric approximate: 49 * 3.14159 = 154.159. So area about 154.2 m^2. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>The area of a circle is given by the formula:
\[
\text{Area} = \pi r^{2}
\]
where \(r\) is the radius.
For a radius \(r = 7\) meters:
\[
\text{Area} = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi\,\text{m}^{2}
\]
Using the value \(\pi \approx 3.14159\):
\[
49\pi \approx 49 \times 3.14159 \approx 154.159\,\text{m}^{2}
\]
**Answer:** The area of a circle with radius 7 m is \(49\pi\) meters², which is approximately **154.2 m²**.<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755059738
|
IvanJAjebu
| 2025-08-13T04:37:06Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:36:34Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnx-community/bert-mini-ONNX
|
onnx-community
| 2025-08-13T04:36:42Z
| 0
| 0
|
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"base_model:prajjwal1/bert-mini",
"base_model:quantized:prajjwal1/bert-mini",
"region:us"
] | null | 2025-08-13T04:36:40Z
|
---
library_name: transformers.js
base_model:
- prajjwal1/bert-mini
---
# bert-mini (ONNX)
This is an ONNX version of [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
AmanPriyanshu/gpt-oss-6.0b-specialized-math-pruned-moe-only-7-experts
|
AmanPriyanshu
| 2025-08-13T04:35:49Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:35:30Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (7 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 7 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~6.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 7 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 7 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 21.9% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the assisted-generation sketch after this list)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
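A hedged sketch of the speculative-decoding use case, using Hugging Face assisted generation with this pruned model as the draft and the full GPT-OSS-20B as the verifier. The actual speed-up depends on how often the draft's tokens are accepted, so treat this as an illustration rather than a benchmarked recipe (it also requires enough memory to hold both models).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Full model verifies; the pruned model drafts candidate tokens.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-6.0b-specialized-math-pruned-moe-only-7-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("Solve 2x + 5 = 17 step by step.", return_tensors="pt").to(target.device)
outputs = target.generate(
    **inputs,
    assistant_model=draft,   # draft proposes tokens, target verifies them
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```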
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 7 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 7 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.0b-specialized-math-pruned-moe-only-7-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.0b-specialized-math-pruned-moe-only-7-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.0b-specialized-math-pruned-moe-only-7-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.0b-specialized-math-pruned-moe-only-7-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis<|message|>We need to solve equation: 2x + 5 = 17. Provide step-by step. Provide explanation. Let's produce final answer.<|end|><|start|>assistant<|channel|>final<|message|>**Step 1: Identify the equation**
\[
2x + 5 = 17
\]
**Step 2: Sub isolate the variable (2x)**
Subtract 5 from the right side of the equation to keep the variable on the left side:
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
**Step 3: Divide the variable (2x) by the coefficient (2)**
Divide both sides by 2:
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
**Answer**: \(x = 6\)<|return|>
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|message|>We need to explain the Pythyth theorem and how it used. Provide explanation. Provide details. Provide usage. Provide explanation. Provide mention of geometry. Provide mention of distance. Provide mention of formula. Provide mention of side. Provide mention of triangle. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of side. Provide mention of
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis<|message|>We need to compute area of a circle? The area of a circle is π times radius squared? Actually area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Actually area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared? Wait: area of a circle is π times radius squared
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
lobbylob/blockassist-bc-placid_soft_ant_1755057727
|
lobbylob
| 2025-08-13T04:35:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid soft ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:35:15Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid soft ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sonspeed/bartpho-cpo-summarization-word
|
sonspeed
| 2025-08-13T04:35:10Z
| 0
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-word",
"base_model:finetune:vinai/bartpho-word",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T12:56:42Z
|
---
library_name: transformers
license: mit
base_model: vinai/bartpho-word
tags:
- generated_from_trainer
model-index:
- name: bartpho-cpo-summarization-word
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bartpho-cpo-summarization-word
This model is a fine-tuned version of [vinai/bartpho-word](https://huggingface.co/vinai/bartpho-word) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation, `adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 5000
- mixed_precision_training: Native AMP
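For reference, below is a minimal sketch of how these hyperparameters would map onto a `Seq2SeqTrainingArguments` configuration. The dataset, preprocessing, and evaluation code are placeholders, since the card does not specify them.

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

model_name = "vinai/bartpho-word"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

args = Seq2SeqTrainingArguments(
    output_dir="bartpho-cpo-summarization-word",
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=400,
    max_steps=5000,
    fp16=True,                       # "Native AMP" mixed precision
)

# Training would then look like (train_ds / eval_ds are placeholders):
# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```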
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
nanaseven/blockassist-bc-sniffing_dextrous_bat_1755059560
|
nanaseven
| 2025-08-13T04:34:13Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sniffing dextrous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:33:50Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sniffing dextrous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/janhq.Jan-v1-4B-GGUF
|
DevQuasar
| 2025-08-13T04:33:46Z
| 0
| 0
| null |
[
"gguf",
"text-generation",
"base_model:janhq/Jan-v1-4B",
"base_model:quantized:janhq/Jan-v1-4B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-13T04:08:15Z
|
---
base_model:
- janhq/Jan-v1-4B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
hanlincs/Bifrost-1
|
hanlincs
| 2025-08-13T04:32:17Z
| 85
| 1
|
transformers
|
[
"transformers",
"diffusers",
"safetensors",
"multi_modality",
"text-to-image",
"arxiv:2508.05954",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-to-image
| 2025-06-15T01:14:42Z
|
---
pipeline_tag: text-to-image
library_name: transformers
license: apache-2.0
---
# Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents
This repository contains the pretrained checkpoints for **Bifrost-1**, a unified framework that bridges pretrained multimodal LLMs (MLLMs) and diffusion models using patch-level CLIP image embeddings as latent variables. Bifrost-1 enables high-fidelity controllable image generation with significant training efficiency without compromising the strong reasoning capabilities of MLLMs.
**Paper**: [Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents](https://huggingface.co/papers/2508.05954)
**Project Page**: [https://bifrost-1.github.io](https://bifrost-1.github.io)
**GitHub Repository**: [https://github.com/hanlincs/Bifrost-1](https://github.com/hanlincs/Bifrost-1)
Bifrost-1 is designed for:
- **High-Fidelity Generation**: Patch-level CLIP latents are natively aligned with the MLLM visual encoder, enabling high-quality image generation.
- **Training Efficiency**: Achieves better image generation quality than architecture variants that use non-MLLM-aligned visual features, under controlled experimental settings and with substantially lower training compute.
- **Preserves Visual Reasoning**: Bifrost-1 fully inherits the strong visual understanding capabilities of the backbone MLLM by equipping it with a visual generation branch initialized from the original MLLM parameters.
<br>
<img width="800" src="teaser.png"/>
<br>
## 🔧 Environment Setup
```shell
conda create -n bifrost1 python==3.11
conda activate bifrost1
pip install -r requirements.txt
```
## 🔮 Inference
### 📌 Model Checkpoints
The model checkpoint can be downloaded from HuggingFace [here](https://huggingface.co/hanlincs/Bifrost-1).
You can download it to your specified `local_dir` with the following code:
```python
from huggingface_hub import snapshot_download
snapshot_download(
repo_id="hanlincs/Bifrost-1",
repo_type="model",
local_dir="xxxxxxxx", # Replace with your local directory path
local_dir_use_symlinks=False
)
```
### 📌 Run Inference Scripts
Generate images from GenEval prompts
```bash
python inference_geneval_dpgbench.py --eval_geneval --output_dir "./outputs" --local_checkpoint_path XXXXX # Replace XXXXX with your local checkpoint path
```
## 📚 BibTeX
🌟 Please let us know in the issues or PRs if you have any questions. If you find our project useful in your research or application development, citing our paper would be the best support for us!
```bibtex
@misc{lin2025bifrost1bridgingmultimodalllms,
title={Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents},
author={Han Lin and Jaemin Cho and Amir Zadeh and Chuan Li and Mohit Bansal},
year={2025},
eprint={2508.05954},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.05954},
}
```
## 🙏 Acknowledgements
The development of Bifrost-1 has been greatly inspired by the following amazing works and teams:
- [BLIP3o](https://github.com/JiuhaiChen/BLIP3o)
- [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL)
- [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
We hope that releasing this model/codebase helps the community to continue pushing these creative tools forward in an open and responsible way.
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755057915
|
kojeklollipop
| 2025-08-13T04:30:42Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:30:38Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-3.0b-specialized-math-pruned-moe-only-2-experts
|
AmanPriyanshu
| 2025-08-13T04:27:53Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:27:40Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (2 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 2 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~3.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 2 |
| **Layers** | 24 |
| **Top-k Routing** | 2 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 2 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 6.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 2 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 2 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-3.0b-specialized-math-pruned-moe-only-2-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.0b-specialized-math-pruned-moe-only-2-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-3.0b-specialized-math-pruned-moe-only-2-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.0b-specialized-math-pruned-moe-only-2-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
<|channel|>analysis=<|end|><|start|>assistant=<|end|><|start|>assistant=<|end|><|start|>assistant=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis=<|end|><|start|>assistant=comment=<|end|><|start|>assistant=analysis
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
<|channel|>analysis<|end|><|start|>assistant<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant (analysis)<|end|><|start|>assistant
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
<|channel|>analysis=analysis=commenting=analysis=commenting=final <|end|><|start|>assistant<|channel|>analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant<|channel|>analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>assistant=analysis=commenting=final <|end|><|start|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755057822
|
indoempatnol
| 2025-08-13T04:27:47Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:27:44Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sondopin/bert_bilstm
|
sondopin
| 2025-08-13T04:26:21Z
| 0
| 0
|
keras
|
[
"keras",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T03:29:38Z
|
---
license: apache-2.0
---
|
AmanPriyanshu/gpt-oss-2.4b-specialized-math-pruned-moe-only-1-experts
|
AmanPriyanshu
| 2025-08-13T04:26:19Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"math",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:26:08Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- math
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Math GPT-OSS Model (1 Expert)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 1 expert per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~2.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 1 |
| **Layers** | 24 |
| **Top-k Routing** | 1 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Math |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts is activated for each token. Expert pruning involves the following steps (a minimal code sketch follows this list):
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
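The exact pruning scripts are not included in this card. The following is a minimal, illustrative sketch of activation-based expert pruning on a toy MoE layer; `ToyMoELayer`, the tensor names, and the random activation counts are all hypothetical and do not reflect the real GPT-OSS layer layout.
```python
# Illustrative sketch only: activation-based expert pruning on a toy MoE layer.
# The class and names below are hypothetical, not the actual GPT-OSS internals.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 32):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # one routing logit per expert
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

def prune_experts(layer: ToyMoELayer, expert_counts: torch.Tensor, keep: int) -> ToyMoELayer:
    """Keep only the `keep` most frequently activated experts.

    expert_counts[i] = how often expert i was routed to on the target-domain data.
    """
    keep_ids = torch.topk(expert_counts, k=keep).indices.sort().values
    pruned = ToyMoELayer(layer.router.in_features, keep)
    # Slice the router so it only scores the surviving experts.
    pruned.router.weight.data = layer.router.weight.data[keep_ids].clone()
    pruned.router.bias.data = layer.router.bias.data[keep_ids].clone()
    # Carry over the surviving expert sub-networks (indices kept in ascending order).
    pruned.experts = nn.ModuleList(layer.experts[i] for i in keep_ids.tolist())
    return pruned

# Example: prune a 32-expert layer down to a single expert using fake activation counts.
layer = ToyMoELayer()
counts = torch.randint(0, 1000, (32,)).float()  # stand-in for measured activation frequencies
pruned = prune_experts(layer, counts, keep=1)
print(len(pruned.experts))  # -> 1
```
A real pipeline would additionally update the model config (number of experts, top-k) and verify that routing still normalizes correctly over the surviving experts.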
### Our Approach
- **Data-Driven Selection**: Used activation patterns from math evaluation tasks
- **Systematic Reduction**: Reduced from 32 experts per layer to 1
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 3.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to math tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This mathematics-focused model utilizes experts that exhibited strong performance on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts excel at mathematical computation, proof strategies, and logical reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for math tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 experts per layer to 1.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 experts to 1 while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-math-pruned-moe-only-1-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-math-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-math-pruned-moe-only-1-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-math-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on math tasks:
### Example 1
**Question:** Solve this equation: 2x + 5 = 17. Show your work step by step.
**Response:**
```
2.5x5. The
Given the text result: 2.5x7.5, 2.5x8.5, 3.5x5.5.5, 4.5.5, 5.5.5.6.5.5.6.51.55, 5.5, 6.6, 7.7, 8.7, 8.8, 9.9, 10.9, 11.11, 12.12, 13.1, 14. 15, 16. 17, 18. 19. 20. 21. 22. 22. 23. 24. 25. 30. 1. 31. 31.32.33. 34. 34. 35. 6. 6. 6. 6. 6. 5. 5. 5. 5. 5. 5. 5. 5. 5. 6. 6. 6. 7. 7. 7. 8. 9. 9. 10. 11. 12. 13. 14. 5. 6. 7. 6. 7. 8. 8. 9. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 30. 31. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
```
---
### Example 2
**Question:** What is the Pythagorean theorem and how is it used?
**Response:**
```
〈2020-07-2020, 2. 2020, and 2.0c,2.01,1,2.0,1.1,2.0,3,4,4,5.1,2.30,3,4,5,6,6.1,2,3,4,5,6.0,1,2,3,2.0,3,3,4.0,5,6,2,3,3,4,7,1,2,1,3,2,8,9,10,2.3 10.1,11,2,12,3,1,2,3,4,4,5,4,4,2.5,6,7,8,3,9,2,3,2,3,3,4,3,5,7,6,2,8,8,9,10,11.1,2,3,4,5,6,7,3,4,5,6,8,9,2,3,2,3,2,8,9,3,2,4,7,2,8,2,3,3,2,3,4,5,7,2,8,3,3,2,2,9,2,3,10,2,3,11,2,3,2,3,2,3,2,2,3,2,3,2,3,2,3,2,3,2,3,2,3,2,3,2,3,2,3,2,2,3,2,3,2,3,2,3,2,3,2,3,2,3,2,2,3,2,2,3,2,2,2,2,2,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
```
---
### Example 3
**Question:** Calculate the area of a circle with radius 7 meters.
**Response:**
```
explanations: [the area of] ... the area of: [the area of] [the area] [the] [area] [the] [area] [area] [area] [area] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the] [the
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755057520
|
mang3dd
| 2025-08-13T04:26:00Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:25:55Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Abdelrahma/ppo-LunarLander-v2
|
Abdelrahma
| 2025-08-13T04:25:02Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-13T04:24:02Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.35 +/- 21.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
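The original card leaves this section empty. Below is a minimal, hedged sketch assuming the checkpoint was pushed with the standard `huggingface_sb3` workflow under the conventional filename `ppo-LunarLander-v2.zip` (the filename is an assumption, and the environment ID may be `LunarLander-v3` on newer Gymnasium releases).
```python
# Hedged usage sketch; the checkpoint filename is assumed, not confirmed by the card.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("Abdelrahma/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```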
|
imgailab/flux1-dev-bf16-ampere
|
imgailab
| 2025-08-13T04:24:44Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"flux1-dev",
"text-to-image",
"tensorrt",
"tensorrt-rtx",
"nvidia",
"ampere",
"bf16",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T04:22:18Z
|
---
license: other
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- flux1-dev
- text-to-image
- diffusers
- tensorrt
- tensorrt-rtx
- nvidia
- ampere
- bf16
---
# Flux.1-dev TensorRT-RTX BF16 Ampere
TensorRT-RTX optimized engines for Flux.1-dev on NVIDIA Ampere architecture (RTX 30 series, A100, etc.) with BF16 precision.
## Model Details
- **Base Model**: black-forest-labs/FLUX.1-dev
- **Architecture**: AMPERE (Compute Capability 8.6)
- **Precision**: BF16 (16-bit brain floating point)
- **TensorRT-RTX Version**: 1.0.0.21
- **Image Resolution**: 1024x1024
- **Batch Size**: 1 (static)
## Engine Files
This repository contains 4 TensorRT engine files:
- `clip.plan` - CLIP text encoder
- `t5.plan` - T5 text encoder
- `transformer.plan` - Flux transformer model
- `vae.plan` - VAE decoder
**Total Size**: 17.3GB
## Hardware Requirements
- NVIDIA RTX 30 series (RTX 3080, 3090) or A100
- Compute Capability 8.6
- Minimum 24GB VRAM recommended
- TensorRT-RTX 1.0.0.21 runtime
## Usage
```python
# Example usage with TensorRT-RTX backend
from nvidia_demos.TensorRT_RTX.demo.flux1_dev.pipelines.flux_pipeline import FluxPipeline
pipeline = FluxPipeline(
cache_dir="./cache",
hf_token="your_hf_token"
)
# Load pre-built engines
pipeline.load_engines(
transformer_precision="bf16",
opt_batch_size=1,
opt_height=1024,
opt_width=1024
)
# Generate image
image = pipeline.infer(
prompt="A beautiful landscape with mountains",
height=1024,
width=1024
)
```
## Performance
- **Inference Speed**: ~8-12 seconds per image (RTX 3090)
- **Memory Usage**: ~18-20GB VRAM
- **Optimizations**: Static shapes, BF16 precision, Ampere-specific kernels
## License
This model follows the Flux.1-dev license terms. Please refer to the original model repository for licensing details.
## Built With
- [TensorRT-RTX 1.0.0.21](https://developer.nvidia.com/tensorrt)
- [NVIDIA Flux Demo](https://github.com/NVIDIA/TensorRT-RTX/)
- Built on NVIDIA GeForce RTX 3090 (Ampere 8.6)
|
datasetsANDmodels/it2en
|
datasetsANDmodels
| 2025-08-13T04:24:33Z
| 0
| 0
| null |
[
"pytorch",
"marian",
"en",
"it",
"base_model:datasetsANDmodels/it2en",
"base_model:finetune:datasetsANDmodels/it2en",
"region:us"
] | null | 2025-08-13T04:08:13Z
|
---
language:
- en
- it
base_model:
- datasetsANDmodels/it2en
---
This model translates text from Italian to English.
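A minimal usage sketch (not part of the original card), assuming the repository ships standard Marian weights and tokenizer files loadable through `transformers`:
```python
# Hedged example: load the Marian checkpoint with the generic seq2seq auto classes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "datasetsANDmodels/it2en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Il gatto dorme sul divano.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "The cat sleeps on the sofa."
```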
|
sanaka87/BAGEL-ReAlign
|
sanaka87
| 2025-08-13T04:23:08Z
| 9
| 5
| null |
[
"any-to-any",
"en",
"zh",
"dataset:jackyhate/text-to-image-2M",
"base_model:ByteDance-Seed/BAGEL-7B-MoT",
"base_model:finetune:ByteDance-Seed/BAGEL-7B-MoT",
"license:apache-2.0",
"region:us"
] |
any-to-any
| 2025-08-11T22:35:39Z
|
---
license: apache-2.0
datasets:
- jackyhate/text-to-image-2M
language:
- en
- zh
base_model:
- ByteDance-Seed/BAGEL-7B-MoT
pipeline_tag: any-to-any
---
# BAGEL-ReAlign (Paper Coming Soon)
> A self-supervised training framework that aligns understanding and generation with modest compute, delivering large **zero-shot** gains in generation and editing capability.
This repository hosts the model weights for **BAGEL-ReAlign**. We fine-tuned BAGEL on six 80GB NVIDIA A800 GPUs for only 27 GPU hours. While understanding capability remains unchanged, our ReAlign method brings a +3.6 **zero-shot improvement** on GenEval, +1.26 on DPGBench, +0.37 on ImgEdit, and +0.33 on GEdit.
For installation, usage instructions, and further documentation, please visit BAGEL's original [GitHub repository](https://github.com/bytedance-seed/BAGEL).
## 🧠 Method
Coming soon! Stay tuned~
## 📊 Benchmarks
### 1. Visual Understanding
Remains Unchanged.
### 2. Text-to-Image Generation
We test at 1024x1024 resolution.
| Model | GenEval ↑ | DPGBench ↑ | WISE ↑ |
| ------------ | --------- | --------- | --------- |
| **BAGEL** | 0.787 | 84.03 | 0.50 |
| **BAGEL-ReAlign** | **0.824** | **85.29** | **0.52** |
### 3. Image Editing
| Model | GEdit-Bench-EN (SC) ↑ | GEdit-Bench-EN (PQ) ↑ | GEdit-Bench-EN (O) ↑ | ImgEdit ↑ |
| ------------- | --------------------- | --------------------- | ------------------- | ------------------ |
| **BAGEL** | 7.96 | 6.64 | 6.94 | 3.38 |
| **BAGEL-NHR** | 8.04 | 6.87 | 7.08 | 3.48 |
| **BAGEL-ReAlign** | **8.24** | 6.87 | **7.27** | **3.75** |
| **FLUX Kontext** | 6.95 | **7.30** | 6.27 | 3.59 |

## License
BAGEL-ReAlign is licensed under the Apache 2.0 license.
## ✍️ Citation
Coming soon!
|
motza0025/blockassist-bc-poisonous_insectivorous_chinchilla_1755057566
|
motza0025
| 2025-08-13T04:22:10Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous insectivorous chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:22:04Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous insectivorous chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bekkuzer/Qwen3-4B-fin_statement_pcss
|
bekkuzer
| 2025-08-13T04:21:51Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T04:19:35Z
|
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
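The original card leaves this section as a placeholder. As a hedged, generic sketch for a Qwen3-based text-generation checkpoint (chat-template support and the example prompt are assumptions, not documented behavior):
```python
# Generic, hedged example for a Qwen3 causal LM; not taken from the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bekkuzer/Qwen3-4B-fin_statement_pcss"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the key line items of an income statement."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```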
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
takanami12/segformerb2_ring
|
takanami12
| 2025-08-13T04:21:45Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T04:20:37Z
|
---
license: apache-2.0
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755058799
|
IvanJAjebu
| 2025-08-13T04:21:28Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:21:02Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maziyaramini/gemma3-1b-fa-sentiment
|
maziyaramini
| 2025-08-13T04:20:57Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T02:20:20Z
|
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma3-1b-fa-sentiment
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma3-1b-fa-sentiment
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="maziyaramini/gemma3-1b-fa-sentiment", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
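As a hedged illustration only, an SFT run with TRL typically looks like the sketch below; the dataset name and arguments are placeholders, not the ones used to produce this checkpoint.
```python
# Placeholder SFT sketch with TRL; dataset and hyperparameters are illustrative only.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # stand-in dataset, not the real training data

trainer = SFTTrainer(
    model="google/gemma-3-1b-it",          # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3-1b-fa-sentiment"),
)
trainer.train()
```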
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|