| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Trending-archita-phukan-first-film/archita.phukan.first.film
|
Trending-archita-phukan-first-film
| 2025-08-13T04:18:27Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-08-13T04:18:12Z
|
|
AmanPriyanshu/gpt-oss-19.7b-specialized-science-pruned-moe-only-30-experts
|
AmanPriyanshu
| 2025-08-13T04:18:00Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:17:02Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (30 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[X/Twitter](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[X/Twitter](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 30 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~19.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 30 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 30 experts per layer
- **No Retraining**: Direct removal without additional training steps
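As an illustration of this selection step, here is a minimal sketch, assuming per-layer activation counts are available and that expert and router parameters can be indexed along an expert dimension; the tensor names and shapes below are hypothetical, not the actual pruning script.
```python
import torch

# Hypothetical activation statistics: how often each of the 32 experts per layer
# fired on the science evaluation prompts (placeholder random counts here).
num_layers, num_experts, keep = 24, 32, 30
expert_counts = torch.randint(0, 10_000, (num_layers, num_experts))

# Retain the `keep` most frequently activated experts in every layer.
keep_idx = expert_counts.topk(keep, dim=-1).indices.sort(dim=-1).values  # (24, 30)

def prune_layer(expert_weights, router_weight, idx):
    """Slice one layer's stacked expert parameters and the matching router rows.

    expert_weights: (num_experts, ...) stacked expert parameters
    router_weight:  (num_experts, hidden) projection producing one logit per expert
    idx:            1-D indices of the experts to retain
    """
    return expert_weights[idx], router_weight[idx]

# Toy usage with tiny dummy shapes; real GPT-OSS tensors are far larger.
experts, router = prune_layer(torch.randn(num_experts, 4, 4), torch.randn(num_experts, 8), keep_idx[0])
```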
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 93.8% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
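For the speculative-decoding use case above, one possible setup is Hugging Face assisted generation, with the pruned model drafting tokens that the full GPT-OSS-20B verifies. This is a sketch rather than a benchmarked recipe: it assumes both checkpoints share a tokenizer and can be loaded together.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-19.7b-specialized-science-pruned-moe-only-30-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Explain Hooke's law.", return_tensors="pt").to(target.device)
# assistant_model enables assisted (speculative) decoding in transformers
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```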
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while reducing the expert count from 32 to 30 per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 30 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-19.7b-specialized-science-pruned-moe-only-30-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.7b-specialized-science-pruned-moe-only-30-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-19.7b-specialized-science-pruned-moe-only-30-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.7b-specialized-science-pruned-moe-only-30-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need to explain photosynthesis.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis** is the process by which green plants, algae, and some bacteria convert light energy from the sun into chemical energy stored in organic molecules (mainly sugars). It occurs mainly in the chloroplasts of plant cells, specifically within the thylakoid membranes. The overall reaction can be summarized as:
\[
6\,\text{CO}_2 + 6\,\text{H}_2\text{O} + \text{light energy} \;\longrightarrow\; C_6H_{12}O_6 + 6\,\text{O}_2
\]
Below is a step‑by‑step breakdown of the process, highlighting the two main stages—**the light‑dependent reactions** and **the Calvin‑Benson cycle (light‑independent reactions)**—and the key molecules involved.
---
## 1. Light‑Dependent Reactions (Photophosphorylation)
| Step | What Happens | Key Components | Outcome |
|------|--------------|----------------|---------|
| **1. Photon absorption** | Light hits chlorophyll a (and accessory pigments) in Photosystem II (PSII). | PSII, chlorophyll a, carotenoids | Excitation of electrons to a high energy state. |
| **2. Water splitting (photolysis)** | Excited electrons are replaced by electrons from water. | Oxygen-evolving complex (OEC) | Produces 2 O₂ molecules, 4 H⁺, and 4 e⁻. |
| **3. Electron transport chain (ETC)** | Electrons travel from PSII → plastoquinone (PQ) → cytochrome b₆f → plastocyanin → Photosystem I (PSI). | PQ, cytochrome b₆f, plastocyanin | Generates a proton gradient across the thylakoid membrane. |
| **4. ATP synthesis** | Protons flow back through ATP synthase (CF₀/CF₁). | ATP synthase (CF₀/CF₁) | Produces ATP (chemiosmosis). |
| **5. NADPH formation** | Electrons reach PSI, are re‑excited by light, then reduce NADP⁺ to NADPH. | PSI, ferredoxin, ferredoxin‑NADP⁺ reductase (FNR) | Generates NAD
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Explain greenhouse gases, absorption, re-emission, trapping heat, radiative transfer, etc.<|end|><|start|>assistant<|channel|>final<|message|>### The Greenhouse Effect in a Nutshell
The **greenhouse effect** is the natural process that keeps Earth’s surface warm enough to support life. It works by trapping a portion of the Sun’s energy that would otherwise escape back into space. The key players are the **greenhouse gases (GHGs)** in the atmosphere—water vapor, carbon dioxide (CO₂), methane (CH₄), nitrous oxide (N₂O), and a few others. These gases absorb and re‑emit infrared (IR) radiation, effectively acting like a blanket around the planet.
---
## 1. Solar Energy Reaches Earth
| Step | What Happens | Why It Matters |
|------|--------------|----------------|
| **1. Solar radiation** | The Sun emits a spectrum of electromagnetic radiation, peaking in the visible range (~0.4–0.7 µm). | This is the primary energy source for Earth. |
| **2. Atmospheric filtering** | Short‑wave (visible/UV) radiation passes through the atmosphere largely unimpeded. | Most of the incoming solar energy reaches the surface. |
| **3. Surface absorption** | Land, oceans, ice, and vegetation absorb the radiation, warming up. | The surface temperature rises, setting the stage for IR emission. |
---
## 2. The Surface Emits Infrared Radiation
- **Thermal (black‑body) radiation**: Any object with a temperature above absolute zero emits IR radiation. The Earth’s surface, at ~288 K, emits strongly in the 8–15 µm range.
- **Wavelength dependence**: The peak wavelength of emitted radiation is inversely proportional to temperature (Wien’s Law). For Earth, the peak is around 10 µm.
---
## 3. Greenhouse Gases Absorb and Re‑emit IR
| Gas | Absorption Bands | Why It’s Important |
|-----|------------------|--------------------|
| **CO₂** | 4.3 µm (strong), 15 µm (moderate) | Most abundant anthropogenic GHG. |
| **H₂O** | Broad, continuous absorption across 3–20 µm | Most abundant GHG; highly variable. |
| **CH₄** | 7.7 µm | Strong absorber, though
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (Deoxyribonucleic Acid)** is the hereditary material that carries the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms and many viruses.
---
## 1. Structure
| Feature | Description |
|---------|-------------|
| **Polymers** | Long chains of nucleotides. |
| **Nucleotide composition** | 5‑prime deoxyribose sugar, a phosphate group, and one of four nitrogenous bases: adenine (A), thymine (T), cytosine (C), or guanine (G). |
| **Double helix** | Two complementary strands wound around each other. |
| **Base pairing** | A pairs with T (2 hydrogen bonds), C pairs with G (3 hydrogen bonds). |
| **Antiparallel orientation** | One strand runs 5′→3′, the other 3′→5′. |
| **Major/minor grooves** | Provide binding sites for proteins. |
| **Supercoiling** | Higher‑order winding that compacts DNA in cells. |
| **Chromatin packaging** | DNA wrapped around histone proteins forming nucleosomes, further folded into chromatin fibers. |
---
## 2. Function
| Function | How it works |
|----------|--------------|
| **Genetic information storage** | The sequence of bases encodes genes that specify proteins and regulatory RNAs. |
| **Replication** | Semi‑conservative replication: each strand serves as a template for a new complementary strand, ensuring faithful copying during cell division. |
| **Transcription** | RNA polymerase reads a DNA template strand to synthesize messenger RNA (mRNA). |
| **Translation** | Ribosomes translate mRNA codons into amino acids, building proteins. |
| **Regulation of gene expression** | DNA‑binding proteins (transcription factors, repressors, activators) modulate which genes are transcribed. |
| **Repair and maintenance** | DNA repair enzymes fix damage (e.g., base excision, nucleotide excision, mismatch repair) to preserve integrity. |
| **Evolutionary diversity** | Mutations, recombination, and horizontal gene transfer introduce variation that natural selection can act upon. |
---
### Key Takeaway
DNA’s double‑helical structure, with its specific base‑pairing rules, provides a stable yet flexible template for storing genetic information, guiding
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755058637
|
0xaoyama
| 2025-08-13T04:17:57Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:17:41Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH
|
sunblaze-ucb
| 2025-08-13T04:16:03Z
| 18
| 0
|
transformers
|
[
"transformers",
"safetensors",
"olmo2",
"text-generation",
"conversational",
"en",
"dataset:math",
"arxiv:2505.19590",
"base_model:allenai/OLMo-2-1124-7B-SFT",
"base_model:finetune:allenai/OLMo-2-1124-7B-SFT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T03:38:17Z
|
---
base_model:
- allenai/OLMo-2-1124-7B-SFT
datasets:
- math
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---
# OLMo-2-7B-SFT-GRPO-MATH-1EPOCH
This model is a GRPO-fine-tuned version of `allenai/OLMo-2-1124-7B-SFT` trained on the MATH dataset.
This model is associated with the paper [Learning to Reason without External Rewards](https://huggingface.co/papers/2505.19590), which introduces **Intuitor**, a reinforcement learning method that fine-tunes large language models (LLMs) using *self-certainty*—the model’s own internal confidence—as the sole reward. This approach is built on a novel paradigm called **Reinforcement Learning from Internal Feedback (RLIF)**, enabling models to learn without external rewards, gold labels, or verifiers by optimizing intrinsic signals.
---
## Project Page & Code
* **Project Page**: [https://sunblaze-ucb.github.io/Intuitor/](https://sunblaze-ucb.github.io/Intuitor/)
* **GitHub Repository**: [https://github.com/sunblaze-ucb/Intuitor](https://github.com/sunblaze-ucb/Intuitor)
## Usage
You can load and use this model with the `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH"
# It's recommended to load with bfloat16 for OLMo-2 models if supported by your hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
# Example usage:
prompt = "Question: What is 2 + 2?
Answer:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Citation
```bibtex
@article{zhao2025learning,
title={Learning to Reason without External Rewards},
author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
journal={arXiv preprint arXiv:2505.19590},
year={2025}
}
```
|
sunblaze-ucb/Qwen3-14B-Intuitor-MATH-1EPOCH
|
sunblaze-ucb
| 2025-08-13T04:15:46Z
| 71
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:math",
"arxiv:2505.19590",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T04:11:01Z
|
---
base_model: Qwen/Qwen3-14B
datasets:
- math
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---
# Qwen3-14B-Intuitor-MATH-1EPOCH
This model is an Intuitor-fine-tuned version of Qwen3-14B trained on the MATH dataset, as presented in the paper [Learning to Reason without External Rewards](https://huggingface.co/papers/2505.19590).
**Intuitor** is a reinforcement learning method that fine-tunes large language models (LLMs) using *self-certainty*—the model’s own internal confidence—as the sole reward. It is built on a novel paradigm called **Reinforcement Learning from Internal Feedback (RLIF)**, which enables LLMs to learn from intrinsic signals without external rewards or labeled data. RLIF offers a scalable and domain-agnostic fine-tuning approach for LLMs in settings where external supervision is expensive or unavailable.
### Links
- 📄 [Paper: Learning to Reason without External Rewards](https://huggingface.co/papers/2505.19590)
- 💻 [GitHub Repository](https://github.com/sunblaze-ucb/Intuitor)
---
## Usage
You can use this model with the Hugging Face `transformers` library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "sunblaze-ucb/Qwen3-14B-Intuitor-MATH-1EPOCH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
)
messages = [
{"role": "user", "content": "Solve the following problem: \\(2x + 5 = 11\\). What is x?"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.7,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id,
)
response = tokenizer.batch_decode(generated_ids[:, model_inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]
print(response)
```
---
## Citation
```bibtex
@article{zhao2025learning,
title = {Learning to Reason without External Rewards},
author = {Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
journal = {arXiv preprint arXiv:2505.19590},
year = {2025}
}
```
|
sunblaze-ucb/Qwen2.5-1.5B-GRPO-MATH-1EPOCH
|
sunblaze-ucb
| 2025-08-13T04:15:40Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:math",
"arxiv:2505.19590",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T06:06:45Z
|
---
base_model: Qwen/Qwen2.5-1.5B
datasets:
- math
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-1.5B-GRPO-MATH-1EPOCH
This model is a GRPO-fine-tuned version of Qwen2.5-1.5B trained on the MATH dataset, as presented in the paper [**Learning to Reason without External Rewards**](https://huggingface.co/papers/2505.19590).
**Abstract from the paper:**
Training large language models (LLMs) for complex reasoning via Reinforcement Learning with Verifiable Rewards (RLVR) is effective but limited by reliance on costly, domain-specific supervision. We explore Reinforcement Learning from Internal Feedback (RLIF), a framework that enables LLMs to learn from intrinsic signals without external rewards or labeled data. We propose Intuitor, an RLIF method that uses a model's own confidence, termed self-certainty, as its sole reward signal. Intuitor replaces external rewards in Group Relative Policy Optimization (GRPO) with self-certainty scores, enabling fully unsupervised learning. Experiments demonstrate that Intuitor matches GRPO's performance on mathematical benchmarks while achieving superior generalization to out-of-domain tasks like code generation, without requiring gold solutions or test cases. Our findings show that intrinsic model signals can drive effective learning across domains, offering a scalable alternative to RLVR for autonomous AI systems where verifiable rewards are unavailable.
---
## Overview of Intuitor and RLIF
**Intuitor** is a reinforcement learning method that fine-tunes large language models (LLMs) using *self-certainty*—the model’s own internal confidence—as the sole reward. It is built on a novel paradigm called **Reinforcement Learning from Internal Feedback (RLIF)**.
<div align="center">
<img src="https://raw.githubusercontent.com/sunblaze-ucb/Intuitor/main/figs/rlif.png" alt="RLIF Overview" width="700">
<br>
<em>Overview of Reinforcement Learning from Internal Feedback (RLIF)</em>
</div>
### 🧭 What is RLIF?
**Reinforcement Learning from Internal Feedback (RLIF)** is a training framework where language models learn *without any external rewards, gold labels, or verifiers*. Instead, models improve by optimizing *intrinsic signals*—such as confidence in their own answers—generated entirely from within. RLIF enables scalable and domain-agnostic fine-tuning of LLMs in settings where human feedback or verifiable supervision is expensive or unavailable.
Intuitor instantiates RLIF by using **self-certainty**—a model's confidence measured via KL divergence to uniform—as an intrinsic reward in the GRPO policy optimization algorithm.
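A minimal sketch of that reward under one plausible reading of the description above: self-certainty as the average KL divergence between a uniform distribution and the model's next-token distribution, plugged into a GRPO-style group-relative advantage. The exact KL direction and normalization in the official implementation may differ; all names and shapes here are illustrative.
```python
import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """Average KL(U || p) over the generated positions (higher = more confident).

    logits: (seq_len, vocab) next-token logits for one sampled completion.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    vocab = log_probs.size(-1)
    # KL(U || p) = -log(V) - (1/V) * sum_j log p_j, computed per position
    kl_per_pos = -torch.log(torch.tensor(float(vocab))) - log_probs.mean(dim=-1)
    return kl_per_pos.mean()

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages: normalize rewards within one prompt's sample group,
    exactly where GRPO would otherwise use external reward scores."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Usage sketch: score a group of G completions for one prompt, then feed the
# advantages into the policy-gradient update.
G, seq_len, vocab = 4, 16, 32000
fake_logits = torch.randn(G, seq_len, vocab)   # stand-in for model outputs
rewards = torch.stack([self_certainty(l) for l in fake_logits])
advantages = grpo_advantages(rewards)
```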
<div align="center">
<img src="https://raw.githubusercontent.com/sunblaze-ucb/Intuitor/main/figs/intuitor.png" alt="Intuitor Framework" width="700">
<br>
<em>The Intuitor Framework</em>
</div>
---
## Code and Usage
The official implementation and detailed usage instructions are available on the [**Intuitor GitHub repository**](https://github.com/sunblaze-ucb/Intuitor).
### Sample Usage
You can load and use this model directly with the Hugging Face `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "sunblaze-ucb/Qwen2.5-1.5B-GRPO-MATH-1EPOCH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16, # or torch.float16 depending on available hardware
device_map="auto"
)
# Example prompt for a mathematical reasoning task
# The Qwen2.5 model expects a specific chat template for instruction-tuned usage.
prompt = "Question: Simplify (x^2 + 2x + 1) / (x + 1)\
Answer:"
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=100,
do_sample=False, # Set to True for creative outputs
temperature=0.7,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id
)
decoded_output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(decoded_output)
```
---
## Benchmarks
Intuitor achieves:
- Comparable performance to GRPO on in-domain math reasoning tasks (GSM8K, MATH500)
- Superior generalization to code generation (LiveCodeBench, CRUXEval)
- Improved instruction following, without needing any gold labels or verifiable test suites
For detailed results, see Table 1 in the paper.
<div align="center">
<img src="https://raw.githubusercontent.com/sunblaze-ucb/Intuitor/main/figs/results.png" alt="Benchmark Results" width="700">
<br>
<em>Detailed results are available in Table 1 of the paper.</em>
</div>
---
## Citation
```bibtex
@article{zhao2025learning,
title={Learning to Reason without External Rewards},
author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
journal={arXiv preprint arXiv:2505.19590},
year={2025}
}
@article{sha2024deepseekmath,
title = {DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models},
author = {Shao, Zhihong and Wang, Peiyi and Zhu, Qihao and Xu, Runxin and Song, Junxiao and Bi, Xiao and … Guo, Daya},
journal = {arXiv preprint arXiv:2402.03300},
year = {2024},
}
```
|
aleebaster/blockassist-bc-sly_eager_boar_1755057451
|
aleebaster
| 2025-08-13T04:15:37Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:15:30Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sunblaze-ucb/Qwen3-14B-GRPO-MATH-1EPOCH
|
sunblaze-ucb
| 2025-08-13T04:15:33Z
| 4
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"reinforcement-learning",
"llm",
"reasoning",
"math",
"conversational",
"en",
"dataset:math",
"arxiv:2505.19590",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T17:44:34Z
|
---
base_model: Qwen/Qwen3-14B
datasets:
- math
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
tags:
- reinforcement-learning
- llm
- reasoning
- math
---
# sunblaze-ucb/Qwen3-14B-GRPO-MATH-1EPOCH
[📄 Paper](https://huggingface.co/papers/2505.19590) | [🌐 Project Page](https://sites.google.com/view/eagle-llm) | [💻 GitHub](https://github.com/sunblaze-ucb/intuitor)
**Description:**
This model is a GRPO-fine-tuned version of Qwen3-14B, specifically trained on the MATH dataset. It is part of the **Intuitor** project, presented in the paper "Learning to Reason without External Rewards".
**Intuitor** is a novel reinforcement learning method that leverages *self-certainty*—the model’s own internal confidence—as its sole reward signal to fine-tune large language models (LLMs). This approach falls under a new framework called **Reinforcement Learning from Internal Feedback (RLIF)**, which enables LLMs to learn effectively from intrinsic signals, circumventing the need for costly external rewards, gold labels, or verifiers. This makes RLIF a scalable and domain-agnostic alternative to traditional RL methods, particularly useful when verifiable rewards are unavailable.
This particular model demonstrates Intuitor's ability to match GRPO's performance on mathematical benchmarks while showing superior generalization to out-of-domain tasks like code generation, all without requiring gold solutions or test cases.
---
## Usage
You can use this model with the `transformers` library for text generation.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "sunblaze-ucb/Qwen3-14B-GRPO-MATH-1EPOCH"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
model.eval()
# Example using a chat-like template, typical for instruction-tuned models like Qwen.
# Adjust prompt format as needed for your specific use case.
messages = [
{"role": "user", "content": "Question: Solve the following equation: $x + 7 = 15$. Show your steps. Answer:"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.7,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id
)
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(generated_text)
```
---
## Citation
If you use Intuitor in your research, please cite our paper:
```bibtex
@article{zhao2025learning,
title = {Learning to Reason without External Rewards},
author = {Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
journal = {arXiv preprint arXiv:2505.19590},
year = {2025}
}
```
|
AmanPriyanshu/gpt-oss-18.5b-specialized-science-pruned-moe-only-28-experts
|
AmanPriyanshu
| 2025-08-13T04:11:31Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:10:38Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (28 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[X/Twitter](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[X/Twitter](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 28 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~18.5B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 28 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 28 experts per layer
- **No Retraining**: Direct removal without additional training steps
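To illustrate how routing is preserved with fewer experts, here is a toy sketch of top-4 routing over the 28 retained experts; the names and shapes are hypothetical and do not mirror the actual GPT-OSS implementation.
```python
import torch
import torch.nn.functional as F

# One MoE layer after pruning: 28 retained experts, top-4 routing, 2880-dim residual.
hidden, num_experts, top_k = 2880, 28, 4
router_weight = torch.randn(num_experts, hidden)  # rows for pruned experts already removed

def route(x: torch.Tensor):
    """Top-k routing over the reduced expert set; the mechanism is unchanged,
    only the pool of candidate experts shrinks from 32 to 28."""
    logits = x @ router_weight.t()             # (tokens, 28)
    weights, idx = logits.topk(top_k, dim=-1)  # pick 4 of the 28 experts per token
    weights = F.softmax(weights, dim=-1)       # renormalize over the chosen experts
    return weights, idx

tokens = torch.randn(5, hidden)
gate_weights, expert_idx = route(tokens)
```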
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 87.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while reducing the expert count from 32 to 28 per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 28 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-science-pruned-moe-only-28-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-science-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-science-pruned-moe-only-28-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-science-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need explain process.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis** is the process by which green plants, algae, and some bacteria convert light energy from the sun into chemical energy stored in organic molecules. In plants, this occurs mainly in the chloroplasts of leaf cells. The overall reaction can be written as:
\[
6\,\text{CO}_2 + 6\,\text{H}_2\text{O} + \text{light energy} \;\longrightarrow\; C_6H_{12}O_6 + 6\,\text{O}_2
\]
In other words, six molecules of carbon dioxide and six molecules of water, using light energy, are turned into one molecule of glucose (a simple sugar) and six molecules of oxygen.
The process is divided into two main stages:
| Stage | Key events | Main products | Where it happens |
|-------|------------|---------------|------------------|
| **Light‑dependent reactions (Photosystems I & II)** | 1. Light photons are absorbed by chlorophyll and accessory pigments. <br>2. Excited electrons travel through the electron transport chain, creating a proton gradient. <br>3. ATP synthase uses the gradient to produce ATP. <br>4. NADP⁺ is reduced to NADPH. | **ATP** (energy currency) and **NADPH** (reducing power) | Thylakoid membranes (inside the chloroplast) |
| **Calvin‑Benson cycle (Light‑independent reactions)** | 1. CO₂ is fixed into 3‑phosphoglycerate (3‑PGA) by the enzyme Rubisco. <br>2. 3‑PGA is phosphorylated by ATP and reduced by NADPH to glyceraldehyde‑3‑phosphate (G3P). <br>3. Some G3P molecules exit the cycle to form glucose and other carbohydrates. <br>4. Remaining G3P is used to regenerate ribulose‑1,5‑bisphosphate (RuBP), the CO₂ acceptor. | **Glucose** (and other sugars), **O₂** (released from water) | Stroma (the fluid surrounding the thylakoids) |
### Detailed Steps
1. **Light Absorption**
- Chlorophyll a and b, carotenoids, and other pigments absorb photons.
- The energy excites electrons in chlorophyll
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect, greenhouse gases, absorption, re-emission, energy balance.<|end|><|start|>assistant<|channel|>final<|message|>### The Greenhouse Effect – What It Is and How It Works
| **Step** | **What Happens** | **Why It Matters** |
|----------|------------------|--------------------|
| 1. Solar radiation reaches Earth | Sun emits a spectrum of electromagnetic waves, mainly visible light and a bit of infrared (IR). | This is the primary energy source for Earth’s climate. |
| 2. Surface absorbs and warms | The Earth’s surface (land, oceans, ice) absorbs most of the incoming solar energy and heats up. | The surface temperature rises to a level that balances the incoming energy. |
| 3. Surface re‑radiates energy | The warmed surface emits energy back into space, but this time it is mostly in the long‑wave (infrared) part of the spectrum. | Infrared radiation is the key to the greenhouse effect. |
| 4. Greenhouse gases absorb IR | Certain gases in the atmosphere—chiefly water vapor (H₂O), carbon dioxide (CO₂), methane (CH₄), nitrous oxide (N₂O), and others—have molecular vibrations that resonate with IR wavelengths. They absorb some of the outgoing IR. | These gases are “transparent” to visible light but “opaque” to IR, so they trap heat. |
| 5. Re‑emission in all directions | After absorbing IR, the greenhouse gas molecules re‑emit radiation in all directions, including back toward the surface. | This back‑radiation warms the surface further, creating a positive feedback loop. |
| 6. Energy balance shifts | The surface now receives more energy (solar + back‑radiation) than it loses to space, so it warms until a new equilibrium is reached. | The planet’s average temperature rises compared to a world without greenhouse gases. |
---
## 1. What Causes the Greenhouse Effect?
| **Natural** | **Anthropogenic (Human‑Made)** |
|-------------|-------------------------------|
| **Water vapor** – the most abundant greenhouse gas, produced by evaporation. | **Carbon dioxide** – from burning fossil fuels, deforestation, cement production. |
| **Methane** – from wetlands, ruminants, rice paddies. | **Methane** – from agriculture, landfills, natural gas leaks. |
| **Nitrous oxide** – from soils, fertilizers.
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (Deoxyribonucleic Acid)** is the hereditary material that carries the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms and many viruses. Its structure and function can be understood in two complementary parts: the physical architecture of the molecule and the biological roles it performs.
---
## 1. Structural Overview
| Feature | Description | Key Details |
|---------|-------------|-------------|
| **Polymers** | DNA is a polymer of nucleotides. | Each nucleotide consists of a deoxyribose sugar, a phosphate group, and one of four nitrogenous bases. |
| **Bases** | Adenine (A), Thymine (T), Cytosine (C), Guanine (G). | A pairs with T via two hydrogen bonds; C pairs with G via three hydrogen bonds. |
| **Backbone** | 5′‑phosphate → 3′‑deoxyribose → 5′‑phosphate → … | The sugar‑phosphate backbone is antiparallel; one strand runs 5′→3′, the complementary strand 3′→5′. |
| **Double Helix** | Two strands wound around each other. | Classic B‑form helix: ~10.5 base pairs per turn, ~3.4 Å rise per base pair. |
| **Major/Minor Grooves** | Spaces between the two strands. | Provide sites for protein binding and enzymatic activity. |
| **Supercoiling** | DNA can be over‑ or under‑wound. | Maintains compactness in cells and facilitates processes like replication and transcription. |
| **Chromatin Packaging** | DNA wrapped around histone proteins. | Forms nucleosomes → 30‑nm fiber → higher‑order structures. |
---
## 2. Functional Roles
| Function | How DNA Performs It | Biological Significance |
|----------|---------------------|------------------------|
| **Genetic Information Storage** | Sequence of bases encodes genes. | Determines phenotype, cellular function, and inheritance. |
| **Replication** | Semi‑conservative duplication via DNA polymerases. | Produces identical copies for cell division. |
| **Transcription** | RNA polymerase reads DNA template → mRNA. | Converts genetic code into messenger RNA for protein synthesis. |
| **Translation (via mRNA)** | Ribosomes read mRNA cod
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
giovannidemuri/llama3b-llamab8-er-afg-v91-seed2-hx-alpaca-fpt
|
giovannidemuri
| 2025-08-13T04:11:29Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T03:00:04Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v91-seed2-hx-alpaca-fpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v91-seed2-hx-alpaca-fpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.0
|
Hfkjc/blockassist-bc-fanged_stinging_sandpiper_1755057870
|
Hfkjc
| 2025-08-13T04:10:43Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fanged stinging sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:10:21Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged stinging sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755058057
|
IvanJAjebu
| 2025-08-13T04:09:00Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:08:35Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
datasetsANDmodels/en2it
|
datasetsANDmodels
| 2025-08-13T04:07:43Z
| 0
| 0
| null |
[
"pytorch",
"marian",
"en",
"it",
"base_model:datasetsANDmodels/en2it",
"base_model:finetune:datasetsANDmodels/en2it",
"region:us"
] | null | 2025-08-13T03:10:14Z
|
---
language:
- en
- it
base_model:
- datasetsANDmodels/en2it
---
This model translates text from English to Italian.
|
DevQuasar/Goekdeniz-Guelmez.Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1-GGUF
|
DevQuasar
| 2025-08-13T04:06:45Z
| 0
| 0
| null |
[
"gguf",
"text-generation",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-13T03:41:32Z
|
---
base_model:
- Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Alan123/ChemDFM-sft-qed-logp-mw-37500
|
Alan123
| 2025-08-13T04:06:07Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T04:04:14Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bboeun/food-lora-re2-model
|
bboeun
| 2025-08-13T04:05:53Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T04:05:38Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmanPriyanshu/gpt-oss-17.3b-specialized-science-pruned-moe-only-26-experts
|
AmanPriyanshu
| 2025-08-13T04:05:18Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:04:27Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (26 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 26 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~17.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 26 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 26 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 81.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 26 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 26 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.3b-specialized-science-pruned-moe-only-26-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-science-pruned-moe-only-26-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.3b-specialized-science-pruned-moe-only-26-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-science-pruned-moe-only-26-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis in Plants – A Step‑by‑Step Overview**
| **Stage** | **What Happens** | **Key Players** | **Result** |
|-----------|------------------|-----------------|------------|
| 1. **Light Capture** | Sunlight hits chlorophyll‑a and chlorophyll‑b in the thylakoid membranes of chloroplasts. | *Light‑harvesting complexes (LHCs)*, *chlorophyll pigments* | Excited electrons are generated. |
| 2. **Water Splitting (Photolysis)** | The oxygen‑evolving complex (OEC) splits H₂O into O₂, 4 H⁺, and 4 e⁻. | *Oxygen‑evolving complex (Mn‑Ca cluster)* | O₂ released to atmosphere; electrons enter the electron transport chain. |
| 3. **Electron Transport Chain (ETC)** | Excited electrons travel through Photosystem II → plastoquinone → cytochrome b₆f → Photosystem I → plastocyanin → NADP⁺ reductase. | *Photosystem II (PSII)*, *Photosystem I (PSI)*, *plastoquinone*, *cytochrome b₆f*, *plastocyanin*, *NADP⁺ reductase* | Generates a proton gradient (ΔpH) across the thylakoid membrane. |
| 4. **ATP Synthesis (Photophosphorylation)** | Protons flow back through ATP synthase, driving ATP formation. | *ATP synthase (CF₀/CF₁)* | 3 ATP per NADPH produced. |
| 5. **NADPH Production** | Electrons reduce NADP⁺ to NADPH at the end of the ETC. | *NADP⁺ reductase* | NADPH carries reducing power to the Calvin cycle. |
| 6. **Calvin Cycle (Carbon Fixation)** | CO₂ is fixed into 3‑phosphoglycerate (3‑PGA) by Rubisco, then reduced to glyceraldehyde‑3‑phosphate (G3P) using ATP & NADPH. | *Rubisco*, *phosphoglycerate kinase*, *G3P dehydrogenase*, *phosphoglycerate mutase*, *enolase*,
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect, causes, mechanism.<|end|><|start|>assistant<|channel|>final<|message|>### What is the Greenhouse Effect?
The **greenhouse effect** is the natural process that warms the Earth’s surface to a temperature that can support life. It occurs when certain gases in the atmosphere trap heat that would otherwise escape into space. Without this effect, the planet would be about 33 °C (59 °F) colder on average.
---
## 1. What Causes the Greenhouse Effect?
| **Primary Greenhouse Gases** | **Key Properties** | **Sources** |
|------------------------------|---------------------|--------------|
| **Water vapor (H₂O)** | Most abundant, highly variable | Evaporation from oceans, lakes, plants |
| **Carbon dioxide (CO₂)** | Long‑lived, well‑mixed | Fossil‑fuel combustion, deforestation, cement production |
| **Methane (CH₄)** | Strong absorber, short lifetime | Agriculture (livestock, rice paddies), landfills, natural gas leaks |
| **Nitrous oxide (N₂O)** | Long‑lived, potent | Agriculture (fertilizers), industrial processes |
| **Ozone (O₃)** | Absorbs UV and IR | Natural photochemical reactions |
| **Fluorinated gases (HFCs, PFCs, SF₆)** | Extremely potent | Refrigeration, electronics, aerospace |
> **Why these gases?**
> They have molecular structures that allow them to absorb and re‑emit infrared (IR) radiation efficiently. When a gas molecule absorbs IR, it vibrates and then re‑radiates energy in all directions, including back toward the Earth’s surface.
---
## 2. How Does It Work? (Step‑by‑Step)
1. **Solar Radiation Reaches Earth**
- The Sun emits a spectrum of electromagnetic radiation, peaking in the visible range (~0.5 µm).
- About 30 % of this energy is reflected by clouds, ice, and the atmosphere (albedo).
- Roughly 70 % is absorbed by the surface (land, oceans, vegetation).
2. **Surface Heats Up**
- The absorbed solar energy raises the temperature of the Earth’s surface.
- A hot surface emits thermal radiation (infrared, ~10 µm).
3. **Infrared Radiation Interacts with Greenhouse Gases**
-
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (Deoxyribonucleic Acid)** is the hereditary material that carries the genetic instructions for the development, functioning, growth, and reproduction of all living organisms and many viruses. Its structure and function can be understood in two complementary ways: **molecular architecture** and **biological role**.
---
## 1. Molecular Structure
| Feature | Description | Biological Significance |
|---------|-------------|--------------------------|
| **Double‑Helix Backbone** | Two complementary strands wound around each other in a right‑handed helix. Each strand is a sugar‑phosphate chain (deoxyribose + phosphate). | Provides structural stability and protects the genetic code. |
| **Base Pairing** | Nucleotides on opposite strands pair via hydrogen bonds: Adenine (A) ↔ Thymine (T) (2 H‑bonds), Cytosine (C) ↔ Guanine (G) (3 H‑bonds). | Enables accurate replication and transcription; the complementary nature underlies the “copy‑and‑paste” mechanism of DNA. |
| **Nucleotide Composition** | Four bases: A, T, C, G. | Determines the sequence of genes and regulatory elements. |
| **Antiparallel Orientation** | One strand runs 5’→3’, the other 3’→5’. | Essential for polymerase enzymes to read and synthesize DNA in the correct direction. |
| **Major & Minor Grooves** | Surface features that allow protein binding. | Sites for transcription factors, polymerases, and other DNA‑binding proteins. |
| **Chromatin Packaging** | DNA wrapped around histone octamers → nucleosomes → higher‑order fibers → chromosomes. | Compact storage in the nucleus; regulates accessibility for transcription, replication, repair. |
---
## 2. Biological Functions
| Function | Mechanism | Key Players |
|----------|-----------|-------------|
| **Genetic Information Storage** | Sequence of bases encodes codons → amino acids → proteins. | Genes, regulatory sequences (promoters, enhancers). |
| **Replication** | Semi‑conservative duplication: each new duplex contains one parental strand. | DNA polymerases, helicases, primases, ligases. |
| **Transcription** | RNA polymerase reads one DNA strand → mRNA. | Promoters, transcription factors, RNA polymerase II (eukaryotes
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
sitwala/whisper-small-anv-sot
|
sitwala
| 2025-08-13T04:03:54Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:dsfsi-anv/za-african-next-voices",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-13T03:56:38Z
|
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- dsfsi-anv/za-african-next-voices
metrics:
- wer
model-index:
- name: Whisper whisper-small sot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dsfsi/za-next-voices/runs/eqhw2fdn)
# Whisper whisper-small sot
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the dsfsi-anv/za-african-next-voices dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4410
- Wer: 24.8628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 75
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6639 | 0.25 | 250 | 0.6573 | 36.9818 |
| 0.4406 | 0.5 | 500 | 0.5333 | 29.8496 |
| 0.3337 | 0.75 | 750 | 0.4794 | 25.3273 |
| 0.409 | 1.231 | 1000 | 0.4410 | 24.8628 |
### Framework versions
- Transformers 4.52.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.4
|
koloni/blockassist-bc-deadly_graceful_stingray_1755056045
|
koloni
| 2025-08-13T04:01:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:01:16Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-16.1b-specialized-science-pruned-moe-only-24-experts
|
AmanPriyanshu
| 2025-08-13T03:59:15Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:58:28Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (24 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 24 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~16.1B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 24 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 24 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 75.0% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 24 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 24 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.1b-specialized-science-pruned-moe-only-24-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.1b-specialized-science-pruned-moe-only-24-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.1b-specialized-science-pruned-moe-only-24-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.1b-specialized-science-pruned-moe-only-24-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis in Plants – A Step‑by‑Step Overview**
| Stage | What Happens | Key Components | Result |
|-------|--------------|----------------|--------|
| **Light‑Dependent Reactions (Photophosphorylation)** | • Light energy is absorbed by chlorophyll and accessory pigments in the thylakoid membranes of chloroplasts. <br>• Excited electrons travel through the electron‑transport chain, pumping protons into the thylakoid lumen. <br>• The proton gradient powers ATP synthase to produce ATP. <br>• NADP⁺ is reduced to NADPH. | • Chlorophyll a & b, carotenoids, photosystems I & II, plastoquinone, cytochrome b₆f, ATP synthase, ferredoxin, NADP⁺ reductase. | • **ATP** (energy currency) and **NADPH** (reducing power) are generated. |
| **Calvin Cycle (Light‑Independent Reactions)** | • CO₂ is fixed into a 5‑carbon sugar (RuBP) by the enzyme **Rubisco**. <br>• The resulting 6‑carbon compound splits into two 3‑phosphoglycerate (3‑PGA) molecules. <br>• 3‑PGA is phosphorylated by ATP and reduced by NADPH to glyceraldehyde‑3‑phosphate (G3P). <br>• One G3P exits the cycle to contribute to glucose and other carbohydrates; the rest is recycled to regenerate RuBP. | • **Rubisco** (most abundant enzyme), **ATP**, **NADPH**, **RuBP**, **G3P**. | • **Glucose** (and other sugars) are synthesized; CO₂ is released back to the atmosphere. |
### Key Points
1. **Location** – All reactions occur in chloroplasts; light‑dependent reactions in thylakoid membranes, Calvin cycle in the stroma.
2. **Energy Flow** – Light energy → ATP + NADPH → CO₂ fixation → sugars.
3. **Water Splitting** – In photosystem II, water is oxidized, releasing O₂, electrons, and protons.
4. **Regulation** – Light intensity, CO₂ concentration, temperature, and stomatal opening control the overall rate.
5. **Outcome** – Plants convert atmospheric CO₂
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explanation.<|end|><|start|>assistant<|channel|>final<|message|>### What Causes the Greenhouse Effect?
The greenhouse effect is a natural process that warms the Earth’s surface. It is caused by certain gases in the atmosphere—called **greenhouse gases (GHGs)**—that absorb and re‑emit infrared (IR) radiation. The main greenhouse gases are:
| Gas | Typical atmospheric concentration | Primary source |
|-----|-----------------------------------|----------------|
| **Water vapor (H₂O)** | ~0.3 % (variable) | Evaporation, transpiration |
| **Carbon dioxide (CO₂)** | ~0.04 % (≈ 410 ppm in 2024) | Fossil‑fuel combustion, deforestation, industrial processes |
| **Methane (CH₄)** | ~0.00018 % (≈ 1.9 ppm) | Agriculture (especially ruminants), wetlands, fossil‑fuel extraction |
| **Nitrous oxide (N₂O)** | ~0.00003 % (≈ 0.3 ppm) | Agriculture (fertilizers), combustion, industrial processes |
| **Ozone (O₃)** | ~0.00002 % (≈ 0.2 ppm) | Photochemical reactions in the troposphere |
| **CFCs and other halocarbons** | trace | Industrial refrigerants, aerosols (now largely phased out) |
These gases are **transparent to visible sunlight** (short‑wave radiation) but **opaque to the long‑wave infrared radiation** that the Earth emits after absorbing solar energy.
---
### How the Greenhouse Effect Works (Step‑by‑Step)
1. **Solar Radiation Reaches Earth**
- The Sun emits a spectrum of electromagnetic radiation, mostly in the visible and near‑infrared range (≈ 0.4–2.5 µm).
- The atmosphere is largely transparent to this short‑wave radiation, so most of it reaches the Earth’s surface.
2. **Earth Absorbs and Re‑Radiates Energy**
- The surface absorbs the incoming solar energy and warms.
- A warm surface emits thermal radiation (long‑wave IR) with a peak wavelength around 10 µm (according to Wien’s law).
3. **Greenhouse Gases Absorb IR Radiation**
- GHG molecules have vibrational modes that resonate with IR wavelengths.
- When a
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (Deoxyribonucleic Acid)** is the hereditary material that stores and transmits genetic information in living organisms. Its structure and function can be understood in two complementary layers: the **molecular architecture** and the **biological roles** it plays.
---
## 1. Structural Features
| Feature | Description | Biological Significance |
|---------|-------------|------------------------|
| **Double‑Helix Backbone** | Two sugar‑phosphate chains wound around each other in a right‑handed helix. | Provides mechanical stability and protects the genetic code. |
| **Nucleotides** | Each unit consists of a deoxyribose sugar, a phosphate group, and a nitrogenous base. | The four bases (A, T, C, G) encode information. |
| **Base Pairing (Watson–Crick)** | Adenine (A) pairs with Thymine (T) via two hydrogen bonds; Cytosine (C) pairs with Guanine (G) via three hydrogen bonds. | Enables accurate replication and transcription. |
| **Antiparallel Orientation** | One strand runs 5′→3′, the other 3′→5′. | Essential for polymerase enzymes to read and synthesize DNA. |
| **Major & Minor Grooves** | Structural channels that allow protein binding. | Sites for transcription factors, polymerases, and repair enzymes. |
| **Chromatin Packaging** | DNA wraps around histone octamers to form nucleosomes → 30‑nm fiber → higher‑order structures. | Condensates DNA for mitosis, regulates gene accessibility. |
| **Topological Constraints** | Supercoiling, knotting, and catenation. | Managed by topoisomerases to facilitate replication and transcription. |
---
## 2. Functional Roles
| Function | Mechanism | Key Players |
|----------|-----------|------------|
| **Genetic Information Storage** | Sequence of bases encodes proteins, RNAs, regulatory elements. | Genes, introns, exons, regulatory sequences. |
| **Replication** | Semi‑conservative duplication; each new duplex contains one parental strand. | DNA polymerases (α, δ, ε), helicases, primase, ligase. |
| **Transcription** | RNA polymerase reads DNA template to synthesize mRNA. | RNA polymerase II (eukaryotes),
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
kujirawhale/blockassist-bc-freckled_rapid_iguana_1755057339
|
kujirawhale
| 2025-08-13T03:57:10Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled rapid iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:56:46Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled rapid iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jack-Payne1/qwen_2.5_7b-phoenix_T4_full_seed3
|
Jack-Payne1
| 2025-08-13T03:56:56Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T03:53:55Z
|
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jack-Payne1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pawin205/REMORX-UREX-SL16k
|
pawin205
| 2025-08-13T03:55:49Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:pawin205/Qwen-7B-Review-ICLR-GRPO-UR",
"base_model:finetune:pawin205/Qwen-7B-Review-ICLR-GRPO-UR",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T18:06:11Z
|
---
base_model: pawin205/Qwen-7B-Review-ICLR-GRPO-UR
library_name: transformers
model_name: REMORX-UREX-SL16k
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for REMORX-UREX-SL16k
This model is a fine-tuned version of [pawin205/Qwen-7B-Review-ICLR-GRPO-UR](https://huggingface.co/pawin205/Qwen-7B-Review-ICLR-GRPO-UR).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pawin205/REMORX-UREX-SL16k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pawin-taechoyotin-university-of-colorado-boulder/REMORX/runs/7mctiekm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755055669
|
mang3dd
| 2025-08-13T03:55:25Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:55:22Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755057257
|
0xaoyama
| 2025-08-13T03:54:50Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:54:42Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755056094
|
aleebaster
| 2025-08-13T03:53:27Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:53:20Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iamzac/blockassist-bc-chattering_strong_butterfly_1755056858
|
iamzac
| 2025-08-13T03:48:43Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering strong butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:48:34Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering strong butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yuan571/gemma-3-finetune-0813
|
yuan571
| 2025-08-13T03:48:34Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-13T03:33:42Z
|
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** yuan571
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmanPriyanshu/gpt-oss-13.7b-specialized-science-pruned-moe-only-20-experts
|
AmanPriyanshu
| 2025-08-13T03:48:07Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:47:28Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (20 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 20 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~13.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 20 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 20 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 62.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 20 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 20 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
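As a rough illustration of steps 1–2, experts can be ranked by how often the router selects them over a domain's prompts. The snippet below is a schematic sketch only; the record keys (`layer`, `expert_id`) are placeholders and do not necessarily match the published dataset's schema.

```python
from collections import Counter

def rank_experts(records, layer_id, top_n=20):
    """Rank experts for one layer by how often the router selected them.

    `records` is an iterable of routing events; the `layer`/`expert_id` keys
    are placeholders, not the dataset's actual field names.
    """
    counts = Counter(r["expert_id"] for r in records if r["layer"] == layer_id)
    return [expert for expert, _ in counts.most_common(top_n)]

# kept = rank_experts(science_records, layer_id=0, top_n=20)
```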
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.7b-specialized-science-pruned-moe-only-20-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.7b-specialized-science-pruned-moe-only-20-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.7b-specialized-science-pruned-moe-only-20-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.7b-specialized-science-pruned-moe-only-20-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis in Plants – A Step‑by‑Step Overview**
| **Stage** | **What Happens** | **Key Players** | **Result** |
|-----------|-------------------|-----------------|-----------|
| 1. **Light Capture** | Sunlight hits chlorophyll‑rich cells (chloroplasts). | • Chlorophyll‑a & b (pigments) | • Excited electrons |
| 2. **Water Splitting (Photolysis)** | Light energy splits H₂O → 2 H⁺ + 2 e⁻ + ½ O₂. | • Photosystem II (PSII) | • Oxygen released to air |
| 3. **Electron Transport Chain (ETC)** | Electrons travel through plastoquinone, cytochrome b₆₀, and Photosystem I (PSI). | • ATP synthase, NADP reductase | • Generates ATP & NADPH |
| 4. **Carbon Fixation (Calvin Cycle)** | CO₂ + 3 ATP + 2 NADPH → G3P (glyceraldehyde‑3‑phosphate). | • CO₂‑fixing enzyme Rubisco | • 3‑C sugars (G3P) |
| 5. **Sugar Utilization** | G3P → glucose, fructose, sucrose, starch, cellulose. | • Hexokinase, aldolase, etc. | • Energy storage & structural material |
| 6. **Oxygen Release** | O₂ produced in water‑splitting step is expelled. | • Gas exchange via stomata | • Supports aerobic life |
### Key Points
1. **Energy Flow**
* Light energy → chlorophyll → electron transport → ATP + NADPH → CO₂ fixation.*
2. **Dual Role of Chloroplasts**
* Light reactions (outer thylakoid membranes) produce ATP/NADPH and O₂.*
* Dark reactions (stroma) use ATP/NADPH to build sugars.*
3. **Regulation**
* Stomatal opening controls CO₂ intake and water loss.*
* Light intensity, CO₂ concentration, and temperature influence rates.*
4. **End Products**
* Primary: glucose (C₆H₁₂O₆) and oxygen.*
* Secondary: starch (storage),
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes, greenhouse gases, absorption, re-emission, energy balance.<|end|><|start|>assistant<|channel|>final<|message|>**What causes the greenhouse effect?**
The greenhouse effect is a natural process that keeps Earth’s surface warm enough for life. It is caused by certain gases in the atmosphere—called *greenhouse gases* (GHGs)—that can absorb and re‑emit infrared (IR) radiation. The main GHGs are:
| Gas | Natural source | Human‑induced source | Typical atmospheric concentration |
|-----|----------------|------------------------|-------------------------------------|
| **CO₂** | Volcanoes, respiration, decomposition | Fossil‑fuel combustion, deforestation | ~400 ppm (≈ 0.04 %) |
| **CH₄** | Wetlands, natural seepage | Agriculture (livestock, rice paddies), natural gas leaks | ~1.8 ppm |
| **N₂O₄** | Lightning, soil microbes | Industrial processes, fertilizers | ~0.3 ppm |
| **H₂O** | Evaporation | None (water vapor is a feedback, not a direct human source) | 0–4 % (varies with temperature) |
| **O₃** | Photochemical reactions | Industrial emissions, vehicle exhaust | ~0.05 ppm |
| **CFCs, HFCs, PFCs** | None | Refrigerants, foam blowing agents | Trace amounts |
The key point is that **greenhouse gases are transparent to visible sunlight** (the energy that reaches Earth from the Sun) but **opaque to the long‑wave infrared radiation** that the Earth emits after absorbing that sunlight.
---
## How the greenhouse effect works
1. **Solar radiation reaches Earth**
The Sun emits a spectrum that peaks in the visible and near‑infrared. The atmosphere is largely transparent to this radiation, so it penetrates to the surface and is absorbed by the land, oceans, and clouds.
2. **Earth re‑radiates energy as thermal IR**
The warmed surface emits energy mainly in the long‑wave (infrared) part of the spectrum (≈ 5–50 µm). This radiation is what the atmosphere must deal with.
3. **Greenhouse gases absorb IR**
Molecules of CO₂, CH₄, N₂O₄, H₂O, etc., have vibrational modes that resonate with IR wavelengths. When an
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (deoxyribonucleic acid)** is the hereditary material that carries the genetic instructions for building and regulating all living organisms. Its structure and function can be broken down into several key components:
| Feature | Description | Functional Role |
|--------|-------------|-----------------|
| **Double‑helix backbone** | Two sugar‑phosphate chains wound around each other in a right‑handed helix. | Provides structural stability and protects the genetic information. |
| **Nucleobases** | Four nitrogenous bases: adenine (A), thymine (T), cytosine (C), guanine (G). | Encode the genetic code; A pairs with T, C pairs with G. |
| **Base pairing (hydrogen bonds)** | A–T (2 H‑bonds), C–G (3 H‑bonds). | Enables complementary base pairing, essential for accurate replication and transcription. |
| **Major & minor grooves** | The helical twist creates two grooves that proteins can access. | Allows DNA‑binding proteins (e.g., transcription factors, polymerases) to recognize specific sequences. |
| **Chromatin packaging** | DNA wrapped around histone proteins to form nucleosomes, further folded into chromatin. | Compacts DNA to fit in the nucleus and regulates gene accessibility. |
| **Genomic organization** | Segmented into genes, regulatory elements, introns, exons, and non‑coding regions. | Determines which proteins are produced and when. |
### Core Functions
1. **Genetic Information Storage**
- The sequence of bases constitutes a code that specifies amino acids, RNA molecules, and regulatory signals.
2. **Replication**
- During cell division, DNA polymerases synthesize a complementary strand for each template strand, producing two identical DNA molecules.
- Fidelity is maintained by proofreading and mismatch repair mechanisms.
3. **Transcription**
- RNA polymerase reads a DNA template to synthesize messenger RNA (mRNA).
- The mRNA sequence is then translated into proteins.
4. **Regulation of Gene Expression**
- DNA‑binding proteins (transcription factors, repressors, activators) interact with promoter and enhancer regions to turn genes on or off.
- Epigenetic modifications (DNA methylation, histone acetylation) alter chromatin structure and influence accessibility.
5. **Repair & Maintenance**
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
mradermacher/lynx-olmo-3b-base-GGUF
|
mradermacher
| 2025-08-13T03:47:48Z
| 21
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:motionlabs/lynx-olmo-3b-base",
"base_model:quantized:motionlabs/lynx-olmo-3b-base",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T06:13:00Z
|
---
base_model: motionlabs/lynx-olmo-3b-base
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/motionlabs/lynx-olmo-3b-base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#lynx-olmo-3b-base-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/lynx-olmo-3b-base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
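A minimal Python sketch for loading one of these quants with `llama-cpp-python` is shown below; the file name and context size are examples only, so adjust them to the quant you downloaded.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="lynx-olmo-3b-base.Q4_K_M.gguf", n_ctx=2048)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```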
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lynx-olmo-3b-base-GGUF/resolve/main/lynx-olmo-3b-base.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KhanhLT26/donut_v1
|
KhanhLT26
| 2025-08-13T03:47:47Z
| 82
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-09T16:06:15Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sakuhara/blockassist-bc-marine_clawed_komodo_1755056635
|
sakuhara
| 2025-08-13T03:45:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine clawed komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:45:19Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine clawed komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-13.1b-specialized-science-pruned-moe-only-19-experts
|
AmanPriyanshu
| 2025-08-13T03:45:26Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:44:48Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (19 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~13.1B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 19 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 19 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 59.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 19 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 19 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.1b-specialized-science-pruned-moe-only-19-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-science-pruned-moe-only-19-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.1b-specialized-science-pruned-moe-only-19-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-science-pruned-moe-only-19-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis in Plants – A Step‑by‑Step Overview**
| **Stage** | **What Happens** | **Key Players** | **Result** |
|---|---|---|---|
| 1. **Light Capture** | Sunlight hits chlorophyll‑containing pigments in the thylakoid membranes of chloroplasts. | • Chlorophyll a & b<br>• Carotenoids (protective pigments) | • Excited electrons are released from chlorophyll |
| 2. **Electron Transport Chain (ETC)** | Excited electrons travel through a series of proteins (photosystem II → cytochrome b₆₀ → photosystem I → cytochrome a₁₀). | • Photosystem II (PSII)<br>• Photosystem I (PSI)<br>• Cytochrome complexes | • Energy is stored in ATP (via proton gradient) and NADPH (via electron reduction) |
| 3. **Carbon Fixation (Calvin Cycle)** | CO₂ from the air is incorporated into sugars using ATP and NADPH. | • Ribulose‑1‑phosphate carboxylase‑oxygenase (RuBisCO)<br>• Other enzymes (e.g., GAPDH, PGK) | • Glyceraldehyde‑3‑phosphate → 3‑phosphoglycerate → 1‑bromopentose → glucose & other carbohydrates |
| 4. **Oxygen Release** | Water molecules split (photolysis) in PSII, releasing O₂ as a by‑product. | • Water‑splitting enzyme (H₂O → 2H⁺ + 4e⁻ + O₂) | • O₂ diffuses out through stomata |
### Quick‑Line Summary
1. **Light energy** excites chlorophyll → electrons are freed.
2. Electrons travel through the ETC, generating **ATP** and **NADPH**.
3. **ATP + NADPH** power the Calvin cycle, fixing atmospheric CO₂ into glucose.
4. **Water** is split to supply electrons, releasing **oxygen**.
**Result:** Plants convert light, CO₂, and H₂O into glucose (energy source) and O₂ (life‑supporting gas).<|return|>
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect, causes, mechanism.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What is the Greenhouse Effect?
The **greenhouse effect** is a natural process that keeps Earth’s surface warm enough to support life. It works by trapping a portion of the Sun’s energy that would otherwise escape into space, much like a greenhouse traps heat.
---
## 2. What Causes the Greenhouse Effect?
| **Primary Cause** | **How it Works** |
|-------------------|-------------------|
| **Atmospheric Greenhouse Gases** | Gases that absorb and re‑radiate infrared (IR) radiation emitted by the Earth’s surface. |
| **Key Gases** | • **Water vapor (H₂O)** – ~70 % of the natural greenhouse effect. <br>• **Carbon dioxide (CO₂)** – ~20 % (human‑produced ~0.04 % of the atmosphere). <br>• **Methane (CH₄)** – ~1 % (human‑produced ~0.0002 %). <br>• **Nitrous oxide (N₂O)**, **ozone (O₃)**, and **CFCs** – smaller but potent contributors. |
| **Human Activities** | • Burning fossil fuels (coal, oil, gas). <br>• Deforection and land‑use changes. <br>• Agriculture (especially livestock, rice paddies). <br>• Industrial processes (cement, ammonia). |
| **Resulting Increase** | Since the pre‑industrial era (~1850), atmospheric CO₂ has risen from ~280 ppm to ~420 ppm, and the total greenhouse‑gas forcing has increased by ~1.8 W m⁻². |
---
## 3. How Does the Greenhouse Effect Work? (Step‑by‑Step)
1. **Solar Radiation Arranges**
* The Sun emits mainly visible and near‑infrared (NIR) light.
* About 30 % of this is reflected by clouds, ice, and the Earth’s surface.
* Roughly 70 % reaches the surface and is absorbed, warming the land, oceans, and atmosphere.
2. **Earth’s Surface Re‑Radiates**
* The warmed surface emits thermal radiation, mostly in the **long‑wave (infra‑red)** part of the spectrum (≈ 5–50
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (Deoxyribonucleic Acid)** is the hereditary genetic material that carries the instructions for building and regulating all living organisms. Its structure and function can be broken down into several key components:
| **Structure** | **Function** |
|---|---|
| **Double‑Helix Backbone** – Two complementary strands of nucleotides wound around each other in a right‑handed helix. The sugar‑phosphate backbone provides structural stability and protects the genetic code. | **Genetic Information Storage** – The sequence of nucleotides encodes the instructions for proteins and RNA molecules. |
| **Nucleotides** – Four bases (Adenine, Thymine, Cytosine, Guanine) pair via hydrogen bonds (A‑T, C‑G). | **Protein Synthesis** – During transcription, RNA polymerase reads the DNA sequence and produces messenger RNA (mRNA), which is translated into proteins. |
| **Major & Minor Groes** – The helical twist creates grooves that allow proteins (e.g., transcription factors, nucleases) to access the DNA. | **Regulation of Gene Expression** – DNA‑binding proteins recognize specific sequences in the grooves to activate or repress genes. |
| **Chromatin Packaging** – In eukaryotes, DNA wraps around histone proteins to form nucleosomes, which further coil into higher‑order structures (chromosomes). | **Cell Division & Chromosome Segregation** – Proper packaging ensures accurate duplication and segregation of genetic material during mitosis and meiosis. |
| **Replication Origin Sites** – Specific sequences (e.g., AT‑rich “origin of replication”) where DNA helicase unwinds the strands. | **DNA Replication** – The double helix is copied during cell division, ensuring each daughter cell receives a complete genome. |
| **Repair Mechanisms** – Enzymes (e.g., DNA polymerase, ligase, helicase) correct mismatches, breaks, and mutations. | **Genome Integrity** – Maintains fidelity of genetic information across generations. |
**Key Functional Roles**
1. **Encoding Genetic Code** – Determines amino acid sequences of proteins, regulatory RNAs, and other biomolecules.
2. **Regulation of Gene Activity** – DNA sequences (promoters, enhancers, silencers) control when and where genes are expressed.
3. **Inheritance** – Passed from parent to offspring,
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
7h3-R3v3n4n7/pentest-agent-lora
|
7h3-R3v3n4n7
| 2025-08-13T03:42:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T02:25:43Z
|
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 7h3-R3v3n4n7
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
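A minimal loading sketch is shown below. It assumes this repository contains a PEFT/LoRA adapter (as the name suggests) on top of the 4-bit Unsloth base model; if the weights were merged instead, load the repository directly with `AutoModelForCausalLM`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter from this repository (assumes a PEFT/LoRA adapter layout)
model = PeftModel.from_pretrained(base, "7h3-R3v3n4n7/pentest-agent-lora")
```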
|
buffaX/ppo-LunarLander-v2
|
buffaX
| 2025-08-13T03:42:04Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-13T03:41:43Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.65 +/- 22.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub convention and is an assumption; adjust it to the actual file in this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed) and load the PPO agent
checkpoint = load_from_hub(repo_id="buffaX/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
peekayitachi/BiasCheck-RoBERTa
|
peekayitachi
| 2025-08-13T03:41:37Z
| 19
| 1
| null |
[
"safetensors",
"roberta",
"region:us"
] | null | 2025-04-27T07:27:06Z
|
# BiasCheck-RoBERTa
## Model Description
**BiasCheck-RoBERTa** is a political bias detection model based on the **RoBERTa** architecture. It classifies news articles into three political bias categories: **Left**, **Center**, and **Right**. The model was trained on a curated dataset of AllSides-labelled articles available on Kaggle and uses **RoBERTa-base** as the underlying text-classification architecture, providing a practical way to gauge the political bias of the news content users consume.
## Base Model
The **BiasCheck-RoBERTa** model is based on the **RoBERTa-base** architecture, a robust transformer-based model that has been pre-trained on vast amounts of text data.
## License
This model is licensed under the **MIT License**.
## Training Data
The model was trained on the following datasets:
- [AllSides Ratings of Bias in Electronic Media](https://www.kaggle.com/datasets/supratimhaldar/allsides-ratings-of-bias-in-electronic-media)
- [Article Bias Prediction Dataset](https://github.com/ramybaly/Article-Bias-Prediction)
## Metrics
The model was evaluated using several performance metrics. Below are the key metrics:
- **Accuracy**: 0.913
- **Precision**: 0.914
- **Recall**: 0.913
- **F1-Score**: 0.913
- **Log Loss**: 0.233
- **AUC-ROC**: 0.986
## Carbon Emission
Experiments were conducted on private infrastructure with a carbon efficiency of 0.432 kgCO₂eq/kWh. A cumulative 24 hours of computation was performed on an RTX 3080 (TDP of 320 W).
Total emissions are estimated at 3.32 kgCO₂eq, of which 0% were directly offset.
**3.32 kgCO₂eq is equivalent to**
- 13.4 km driven by an average ICE car
- 1.66 kg of coal burned
- 0.06 tree seedlings sequestering carbon for 10 years
## Installation
To use this model, you will need to install the following dependencies:
Create a `requirements.txt` file with the following dependencies:
- torch>=2.0
- transformers>=4.35
- datasets
- scikit-learn
- numpy
- pandas
- matplotlib
- seaborn
- nltk
- spacy
- pydantic
- fastapi
- uvicorn
- bert-score
- contractions
```bash
pip install -r requirements.txt
```
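A minimal inference sketch (not part of the original training setup) is shown below; read the Left/Center/Right mapping from the model's `config.id2label` rather than assuming a label order.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "peekayitachi/BiasCheck-RoBERTa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The new policy drew sharp criticism from lawmakers on both sides."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# The id-to-label mapping comes from the model config, not an assumed order
print(model.config.id2label.get(pred, pred))
```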
|
zuruyu/blockassist-bc-endangered_pesty_chinchilla_1755056419
|
zuruyu
| 2025-08-13T03:41:33Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered pesty chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:41:14Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered pesty chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts
|
AmanPriyanshu
| 2025-08-13T03:40:13Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:39:33Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (17 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 17 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~12.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 17 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 17 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 53.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 17 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 17 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis – the “making of food” in plants**
| Step | What happens | Key molecules & enzymes | Where it occurs |
|------|--------------|------------------------|----------------|
| 1. **Light absorption** | Chlorophyll‑a, chlorophyll‑b, and accessory pigments in the thylakoid membranes absorb photons (400–700 nm). | • Chlorophyll a & b | Thylakoid membranes of chloroplasts |
| 2. **Excitation of electrons** | Energy excites electrons in chlorophyll‑a → they are passed to the *electron transport chain* (ETC). | • Photosystem II (PSII) → plastoquinone → Photosystem I (PSI) | Thylakoid membrane |
| 3. **Water splitting (photolysis)** | PSII uses light energy to split H₂O → 2 H⁺ + 2 e⁻ + O₂. | • Oxygen‑evolving complex (OEC) | Thylakoid membrane |
| 4. **ATP synthesis** | Electrons travel through ETC, pumping protons into the thylakoid lumen → proton‑gradient drives ATP synthase → ATP. | • ATP synthase | Thylakoid membrane |
| 5. **NADPH formation** | Electrons reduce NADP⁺ to NADPH (via PSI). | • PSI, cytochrome b₆₁ | Thylakoid membrane |
| 6. **Carbon fixation (Calvin cycle)** | In the stroma, CO₂ is fixed into 3‑phosphoglycerate (3‑PG) by the enzyme Rubisco. 3‑PG is then recycled to produce sugars (glucose, fructose). | • Rubisco, GAPDH, PK, etc. | Stroma of chloroplasts |
| 7. **Sugar export** | Glucose and other sugars are transported out of the chloroplast to the cytosol and then to other tissues. | • Hexokinase, sucrose transporters | Cytosol, plasmodesmata |
**Key points**
1. **Energy source** – Light energy drives the entire process.
2. **Electron transport chain** – Generates ATP and NADPH, the “energy currency” for the Calvin cycle.
3. **Water → Oxygen** – Photosynthesis releases O₂
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes, greenhouse gases, absorption of IR, trapping heat, radiative balance.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What causes the greenhouse effect?
The “greenhouse effect” (or “greenhouse‑gas effect”) is the natural warming of the Earth’s surface that results from the presence of certain gases in the atmosphere that can absorb and re‑emit infrared (IR) radiation. The effect is named after the way a greenhouse works: a glass roof lets sunlight (visible light) in, but traps the heat that the plants and the roof emit back as IR radiation.
The main greenhouse gases (GHGs) are:
| Gas | Natural concentration (ppm) | Typical source | Radiative effect |
|------|------------------------------|----------------|----------------|
| Water vapor | 2000–3000 | Evaporation, weather | Strong, but variable |
| Carbon dioxide (CO₂) | 400 | Fossil fuels, deforestation, industry | Strong, long‑term |
| Methane (CH₄) | 1.8 | Agriculture, natural gas, wetlands | Very potent, short‑term |
| Nitrous oxide (N₂O) | 0.3 | Agriculture, livestock, industrial | Potent, short‑term |
| Ozone (O₃) | 0.3 | Photochemical reactions | Strong, short‑term |
| Aerosols (e.g., sulfate) | – | Volcanic, industrial | Generally cooling (anti‑greenhouse) |
The atmosphere is transparent to most of the Sun’s visible and ultraviolet (UV) radiation, so that radiation can reach the surface. Once the surface is heated, it emits long‑wave IR radiation. Greenhouse gases absorb a fraction of that IR and then re‑emit it in all directions. A fraction of the re‑emitted IR escapes to space, but a larger fraction is directed back toward the surface, keeping the surface warmer than it would be if the atmosphere were transparent to IR.
### 2. How does it work? (Radiative transfer)
1. **Solar radiation reaches the Earth**
- The Sun emits a spectrum that peaks in the visible/near‑UV.
- The atmosphere is largely transparent to this part of the spectrum, so it reaches the surface.
2. **Surface absorbs and re‑emits**
- The surface absorbs the incoming solar energy and warms.
- A warm surface emits
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need to explain DNA structure: double helix, nucleotides, base pairs, sugar-phosphate backbone, major bases, base pairing rules, major vs minor, major groove, minor groove, hydrogen bonding, base composition, sequence, etc. Function: genetic information storage, replication, transcription, regulation, coding for proteins, etc. Also mention roles in evolution, regulation, epigenetics. Provide details.<|end|><|start|>assistant<|channel|>final<|message|>**DNA – the Double‑Helix Blueprint of Life**
---
## 1. The Physical Structure of DNA
| Feature | What it Is | Why It Matters |
|--------|------------|----------------|
| **Double‑helix** | Two strands of nucleic acid that wind around each other like a ladder. | Gives stability, protects the genetic code, and allows proteins to read the sequence. |
| **Back‑to‑back sugar‑phosphate backbone** | 5‑carbon de‑oxyribose sugars linked by phosphates. | Provides a rigid, uniform scaffold that keeps the bases in the right orientation. |
| **Nucleobases** | Four nitrogenous bases: Adenine (A), Thymine (T), Cytosine (C), Guanine (G). | Carry the “letters” of the genetic alphabet. |
| **Base pairing** | A pairs with T, C pairs with G via hydrogen bonds (2 H‑bonds for A‑T, 3 for C‑G). | Ensures complementary strands can bind tightly and be read accurately. |
| **Major vs. Minor groove** | The A‑T and C‑G pairs create a wide, deep major groove and a shallow, narrow minor groove. | The major groove is the site where proteins (e.g., transcription factors, DNA‑binding proteins) dock. |
| **Sequence direction** | 5′ → 3′ (from 5‑phosphate to 3‑phosphate). | Sets the “handedness” of the helix and the direction of replication/transcription. |
| **Length & flexibility** | A typical eukaryotic chromosome is ~10⁶–10⁹ base pairs; DNA can be a few nanometers to centimeters long. | Allows packing of vast amounts of information in a single cell. |
### Visualizing the Ladder
```
5′-G‑C‑A‑T‑G‑C‑...‑3′
| | | | |
```
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
BinBashir/small_Naija_BERT_on_jumia_dataset
|
BinBashir
| 2025-08-13T03:37:54Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T03:16:39Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
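Until the authors fill this in, a minimal sketch based on the repository's `text-classification` tag might look like the following; the label names, preprocessing, and intended domain (Jumia product reviews, inferred from the repo name) are assumptions, not documented behavior.

```python
from transformers import pipeline

# Assumes the checkpoint ships a standard BERT sequence-classification head and tokenizer
classifier = pipeline(
    "text-classification",
    model="BinBashir/small_Naija_BERT_on_jumia_dataset",
)

# Illustrative input only; the actual label set is whatever the checkpoint defines
print(classifier("This product arrived quickly and works exactly as described."))
```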
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ElToro2602/blockassist-bc-raging_prehistoric_chameleon_1755056057
|
ElToro2602
| 2025-08-13T03:35:13Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging prehistoric chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:35:04Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging prehistoric chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAI-Research/GTC-2-Medium
|
TAI-Research
| 2025-08-13T03:35:09Z
| 0
| 1
| null |
[
"text-generation",
"en",
"base_model:TAI-Research/GTC-2-Medium",
"base_model:finetune:TAI-Research/GTC-2-Medium",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-13T00:22:46Z
|
---
license: mit
language:
- en
base_model:
- TAI-Research/GTC-2-Medium
pipeline_tag: text-generation
---
|
Nerva1228/xiaozi
|
Nerva1228
| 2025-08-13T03:34:31Z
| 2
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T03:16:57Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: xiaozi
---
# Xiaozi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `xiaozi` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "xiaozi",
"lora_weights": "https://huggingface.co/Nerva1228/xiaozi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/xiaozi', weight_name='lora.safetensors')
image = pipeline('xiaozi').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/xiaozi/discussions) to add images that show off what you’ve made with this LoRA.
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755054443
|
indoempatnol
| 2025-08-13T03:32:38Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:32:35Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-9.6b-specialized-science-pruned-moe-only-13-experts
|
AmanPriyanshu
| 2025-08-13T03:30:58Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:30:30Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (13 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 13 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 13 experts per layer
- **No Retraining**: Direct removal without additional training steps
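The selection-and-removal steps above can be sketched in a few lines. This is a rough illustration only: the attribute names (`experts`, `router`, `num_experts`) and the assumption that expert weights are stacked along dimension 0 are stand-ins, not the actual GPT-OSS module layout.

```python
import torch

def prune_moe_layer(moe_layer, expert_counts, num_keep=13):
    """Keep only the `num_keep` most frequently routed experts of one MoE layer."""
    # Rank experts by how often the router selected them on science prompts
    keep = torch.topk(expert_counts, num_keep).indices.sort().values

    # Drop the unused experts' weights (experts assumed stacked along dim 0)
    moe_layer.experts.weight = torch.nn.Parameter(moe_layer.experts.weight[keep].clone())

    # Shrink the router so it only scores the retained experts
    moe_layer.router.weight = torch.nn.Parameter(moe_layer.router.weight[keep].clone())
    if moe_layer.router.bias is not None:
        moe_layer.router.bias = torch.nn.Parameter(moe_layer.router.bias[keep].clone())

    moe_layer.num_experts = num_keep
    return moe_layer
```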
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 40.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
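For the speculative-decoding use case listed above, the pruned checkpoint can in principle act as the draft model via 🤗 Transformers assisted generation. The snippet below is an untested sketch, not a benchmarked configuration; the prompt and generation settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype="auto", device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-9.6b-specialized-science-pruned-moe-only-13-experts",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain the role of Rubisco in the Calvin cycle."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# assistant_model switches generate() into assisted (speculative) decoding,
# with the pruned model proposing draft tokens for the full model to verify
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```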
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-science-pruned-moe-only-13-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-science-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-science-pruned-moe-only-13-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-science-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need to explain steps, light absorption, electron transfer, Calvin cycle, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis – the “green” chemistry of life**
Plants turn sunlight into chemical energy to grow, reproduce, and maintain their ecosystems. The process is a coordinated dance of light‑energy conversion, electron transfer, and carbon‑fixing chemistry that takes only a few seconds in a leaf but is essential for the biosphere.
---
## 1. Light‑energy capture: the chlorophyll antenna
| **Step** | **What happens** | **Why important** |
|---|---|---|---|
| **Photon absorption** | Chlorophyll‑a and chlorophyll‑b (plus accessory pigments) absorb photons in the blue‑green part of the visible spectrum (≈400–650 nm). | These pigments act as “antennae” that can harvest the energy of photons. |
| **Excite formation** | The absorbed photon promotes an electron from the *ground state* (S₀) to an *excited state* (S₁) of the chlorophyll molecule. | The excited state is short‑lifetime (≈10⁻¹�⁴ s) and must be converted to a usable chemical form. |
| **Charge‑separation** | The excited chlorophyll rapidly undergoes *charge‑separation* (a “photochemical” step). An electron is promoted to the *conhole* (S₁ → S₁) and a hole (S₁ → S₁) is left behind. | This creates a “hot” electron that can be used for a chemical reaction. |
---
## 2. Electron transfer to the *electron‑transport chain*
| **Step** | **What happens** | **Key enzymes** |
|---|---|---|---|
| **Electron transfer** | The hot electron is passed to the *electron‑transport chain* (ETC) embedded in the thromosome (a protein complex). | The ETC is a series of redox reactions that ultimately produce a proton gradient and ATP. |
| **ATP synthesis** | The ETC pumps protons across the *inner* and *outer* membranes of the mitochondoid‑like thromosome, creating a proton motive force (ΔpH ≈ 0.5 mV). | The proton gradient is used to drive the *hydrolysis* of ATP‑generating reactions. |
| **
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes, greenhouse gases, absorption, re-emission, warming.<|end|><|start|>assistant<|channel|>final<|message|>### What is the “greenhouse effect” (or greenhouse effect)?
The greenhouse effect is the warming of the Earth’s surface and atmosphere caused by the trapping of heat that is emitted from the Earth’s surface and from the Sun. It is the same as the greenhouse effect in a laboratory: a container of a gas that absorbs and re‑emits infrared radiation, keeping the temperature higher than it would be if the gas were not present.
---
## 1. What gases are responsible?
The main greenhouse gases (GHGs) are:
| Gas | Source | Effect on the atmosphere |
|------|--------|---------------------------|
| **CO₂** (carbon dioxide) | Fossile‑fuel combustion, methane oxidation, volcanic CO₂ | Strong absorber of IR, 1‑2 % of the Earth’s energy budget |
| **CH₄₄** (methane) | Natural gas, bi‑methane, fossil‑fuel combustion | Strong IR absorber, 1‑2 % of the energy budget |
| **N₂O₄** (nitrous oxide) | Fossile‑fuel combustion, industrial processes | Strong IR absorber, 0.5 % of the energy budget |
| **H₂O** (water vapor) | Evaporised water from the surface | Very strong IR absorber, but it is a natural feedback |
| **C₄H₁₀** (C‑clane) | Fossile‑fuel combustion, industrial processes | Strong IR absorber, 0.5 % of the energy budget |
| **C₂O₂** (CO₂) | Fossile‑fuel combustion, methane oxidation | Strong IR absorber, 1‑2 % of the energy budget |
The other gases (e.g., N₂, O₂, Ar, He) are essentially transparent to IR radiation and do not contribute to the greenhouse effect.
---
## 2. How does the greenhouse effect work?
1. **Solar radiation reaches the Earth**
The Sun emits a broad spectrum of radiation. About 30 % of the solar energy is absorbed by the Earth’s surface and atmosphere. The rest is reflected or transmitted into space.
2. **The Earth emits thermal radiation**
The Earth is in thermal equilibrium: the absorbed solar energy is re‑emitted as a black‑body spectrum (mostly in
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need to explain DNA structure: double helix, base pairs, sugar, phosphate, nitrogen, etc. Function: genetic code, replication, transcription, translation, regulation, etc.<|end|><|start|>assistant<|channel|>final<|message|>**DNA: the double‑helix of life**
---
## 1. The “structure” of DNA
| **Component** | **What it does** | **Key features** |
|----------------|----------------|----------------|
| **Double helix** | Two strands wind around each way | Right‑handed (R) and left‑handed (L) helices |
| **Base pairs** | 10 pairs per 3.4 nm (≈ 10 nm per 100 bp) | 10 bp = 3.4 nm; 10 bp = 10 nm (≈ 10 nm per 100 bp) |
| **Sugar (deph‑phosph‑deph‑phosph)** | Provides the backbone | 5 × 10⁻¹�⁰ kJ/mol of bond energy |
| **Phosphorus (P)** | Provides the backbone | 5 × 10⁻¹��⁰ kJ/mol of bond energy |
| **Nitrogen (N)** | Provides the backbone | 5 × 10⁻¹��⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹�⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹�⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹�⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹�⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹⁰ kJ/mol of bond energy |
| **Hydrogen (H)** | Provides the backbone | 5 × 10⁻¹⁰ kJ/mol
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755054617
|
Sayemahsjn
| 2025-08-13T03:29:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:29:13Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-9.0b-specialized-science-pruned-moe-only-12-experts
|
AmanPriyanshu
| 2025-08-13T03:28:51Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:28:23Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (12 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 12 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 12 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 12 experts per layer
- **No Retraining**: Direct removal without additional training steps
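The activation analysis behind this selection can be approximated with forward hooks that tally which experts the router picks on domain data. The module path (`layer.mlp.router`), layer list, and hook wiring below are illustrative assumptions, not the exact analysis code used for the dataset.

```python
import torch

@torch.no_grad()
def count_expert_activations(model, dataloader, num_layers=24, num_experts=32, top_k=4):
    """Tally how often each expert appears among the router's top-k choices."""
    counts = torch.zeros(num_layers, num_experts, dtype=torch.long)

    def make_hook(layer_idx):
        def hook(module, inputs, router_logits):
            chosen = router_logits.topk(top_k, dim=-1).indices.flatten()
            counts[layer_idx] += torch.bincount(chosen, minlength=num_experts)
        return hook

    # `layer.mlp.router` is a placeholder path; the real module name may differ
    handles = [layer.mlp.router.register_forward_hook(make_hook(i))
               for i, layer in enumerate(model.model.layers)]
    for batch in dataloader:
        model(**batch)
    for handle in handles:
        handle.remove()
    return counts
```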
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 37.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 12 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 12 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.0b-specialized-science-pruned-moe-only-12-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.0b-specialized-science-pruned-moe-only-12-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.0b-specialized-science-pruned-moe-only-12-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.0b-specialized-science-pruned-moe-only-12-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need to explain photosynthesis.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis – the life‑fueling process of plants**
---
### 1. What is photosynthesis?
Photosynthesis is the conversion of light energy into chemical energy that plants use to grow, reproduce and maintain their life. In short, plants “burn” sunlight to make a new molecule of glucose (the sweet sugar that fuels life).
---
### 2. The key parts of the process
| **Subcellular organ** | **Molecule** | **Function** |
|---|---|---|
| **Chloroplast** | – | The site where the reaction happens |
| **Stromule** | – | The tiny “poore” that lets light reach the chlorophyll |
| **Chlorophyll** (main pigment) | – | Abs absorbs light, especially red‑green wavelengths |
| **Car‑t‑p** | – | The enzyme that turns the light‑energy into chemical energy |
| **Glucose** | – | The product that fuels the plant’s metabolism |
---
### 3. Step‑by‑step
1. **Light enters the leaf**
Sunlight (or artificial light) penetrates the leaf surface. The chlorophyll molecules inside the chloroplast absorb photons of the red‑green part of the spectrum (≈400–700 nm).
2. **Photon energy is captured**
Each photon carries a certain amount of energy. When a chlorophyll molecule absorbs a photon, the energy is transferred to the excited state of the *excated* electron.
3. **Excited electron moves**
The excited electron is transferred to the *excited* state of the *car‑t‑p* enzyme (the “red‑green” reaction centre). The enzyme has two sub‑units:
* **P** (phph‑ph‑ph) – the “ph” part that accepts the photon.
* **N** (nit‑nit‑nit) – the “nit” part that accepts the electron.
The electron moves from P to N, creating a “chemical” potential difference (a red‑green charge).
4. **Red‑green charge drives the reaction**
The red‑green charge is used to drive the reaction that splits two molecules of water (H₂O) into one molecule of oxygen (O₂) and one molecule of hydrogen (H₂). The
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes and mechanism.<|end|><|start|>assistant<|channel|>final<|message|>**The “greenhouse effect” (or simply the greenhouse effect) is the phenomenon that the Earth’s atmosphere traps heat, keeping the surface warmer than it would be if the atmosphere were transparent to all radiation.**
---
## 1. What causes it?
| **Key factors** | **How they contribute** |
|---|---|
| **Atmospheric composition** | The atmosphere contains gases that are very good at absorbing and re‑emtrating infrared (IR) radiation. The main gases are:
* **Water vapor (H₂O)** – abundant in the lower tro‑m, highly absorbing in the IR.
* **Carbon dioxide (CO₂)** – a strong greenhouse gas, especially in the 10–15 µm window.
* **Methane (CH₄) and nit‑ox (N₂O)** – also absorb IR. | These gases absorb incoming IR from the Earth’s surface, heat them up, and then re‑emit the same IR back to space. |
| **O‑zone and aerosols** | O‑zone absorbs UV, but also scatters visible light. Aerosols (dust, clouds, soot) scatter and absorb visible/IR light, adding a “back‑reflection” that warms the surface. | |
| **Radiative imbalance** | The Earth receives more solar radiation than it emits. The greenhouse effect is the *feedback* that keeps the surface warmer, which in turn increases the amount of greenhouse gases (e.g., more water vapor, more CO₂). | |
The main driver is the **absorption–re‑emission cycle** of the greenhouse gases.
---
## 2. How it works – the radiative transfer picture
1. **Solar radiation enters the atmosphere**
* Short‑wave (UV, visible) is largely absorbed by the upper‑tro‑m and reflected by clouds.
* Long‑wave (IR) penetrates deeper, reaching the surface.
2. **Surface emits IR**
The Earth’s surface emits a black‑body spectrum (≈ 10 µm peak). This IR travels upward through the atmosphere.
3. **Absorption by greenhouse gases**
As the IR passes, gases like CO₂, H₂O, CH₄ absorb specific wavelengths. The energy is transferred to the gas molecules, raising their temperature.
4. **Re
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need to explain DNA structure: double helix, 5 strands, 10 base pairs, etc. Function: genetic information, coding, replication, transcription, etc.<|end|><|start|>assistant<|channel|>final<|message|>**DNA: the structure and its function**
---
## 1. The DNA molecule – a double‑helix of 5 strands
| **Key features** | **What does it do?** |
|---|---|
| **Double‑helix** – two strands wind around each other, giving a “hand‑crossed” shape. | Provides a compact, stable structure that protects the genetic information. |
| **5 strands** – 5 parallel strands of the sugar backbone (phosphodiol). | Gives a “hand‑crossed” arrangement that allows the base‑pairs to be read from one side. |
| **10 base‑pairs** – 5 complementary base‑pairs (A–T, G–C) on each strand. | Each base‑pair is a “code” that tells the protein what it should do. |
| **Sugar backbone** – de‑phosphoryated sugar (d‑de‑phosphoryated). | Provides a scaffold that keeps the DNA stable and ready for replication. |
| **Base‑pairs** – 5 complementary pairs (A–T, G–C). | The complementary pairs are the “code” that can be read by the enzymes that will read the DNA. |
| **Base‑pairs** – 5 complementary pairs (A–T, G–C). | The complementary pairs are the “code” that can be read by the enzymes that will read the DNA. |
| **Base‑pairs** – 5 complementary pairs (A–T, G–C). | The complementary pairs are the “code” that can be read by the enzymes that will read the DNA. |
| **Base‑pairs** – 5 complementary pairs (A–T, G–C). | The complementary pairs are the “code” that can be read by the enzymes that will read the DNA. |
| **Base‑pairs** – 5 complementary pairs (A–T, G–C). | The complementary pairs are the “code” that can be read by the enzymes that will read the DNA. |
| **Base‑pairs** – 5 complementary pairs (A–T, G–C). | The complementary pairs are the “code” that can be read by the enzymes that will
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
lautan/blockassist-bc-gentle_patterned_goat_1755054204
|
lautan
| 2025-08-13T03:28:21Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:28:17Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BinBashir/distil-base_on_jumia_dataset
|
BinBashir
| 2025-08-13T03:28:14Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T03:27:50Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
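No usage snippet is provided yet; based on the repository's `text-classification` tag, a minimal assumed-usage sketch could be the following (labels and input domain are guesses, not documented behavior):

```python
from transformers import pipeline

# Assumes a standard DistilBERT sequence-classification head; labels are whatever the checkpoint defines
classifier = pipeline("text-classification", model="BinBashir/distil-base_on_jumia_dataset")
print(classifier("Fast delivery and the phone works perfectly."))
```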
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ginic/hyperparam_tuning_1_wav2vec2-large-xlsr-53-buckeye-ipa
|
ginic
| 2025-08-13T03:28:07Z
| 0
| 0
| null |
[
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2025-08-13T01:28:28Z
|
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus.
For details about specific model parameters, please view the config.json here or
training scripts in the scripts/buckeye_experiments folder of the GitHub repository.
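The card itself does not include a usage snippet; since the pipeline tag is `automatic-speech-recognition`, a minimal sketch might look like the following. The audio file name is a placeholder, and the 16 kHz mono input is the usual wav2vec2 expectation rather than something documented here.

```python
from transformers import pipeline

# CTC-based wav2vec2 checkpoint fine-tuned for IPA phonetic transcription of English speech
transcriber = pipeline(
    "automatic-speech-recognition",
    model="ginic/hyperparam_tuning_1_wav2vec2-large-xlsr-53-buckeye-ipa",
)

# "speech.wav" is a placeholder path; supply your own 16 kHz mono recording
print(transcriber("speech.wav")["text"])
```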
# Experiment Details
The best-performing model from the hyperparameter tuning experiments (batch size, learning rate, base model to fine-tune). Vary the random seed used to select training data while keeping an even 50/50 gender split, to measure the statistical significance of changing the training data selection. Retrain with the same model parameters but different data seeding to measure the statistical significance of the data seed, keeping the 50/50 gender split.
Goals:
- Choose initial hyperparameters (batch size, learning rate, base model to fine-tune) based on validation set performance
- Establish whether data variation with the same gender makeup is statistically significant in changing performance on the test set (first data_seed experiment)
|
aditeyabaral-redis/langcache-reranker-v1-test
|
aditeyabaral-redis
| 2025-08-13T03:27:52Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"cross-encoder",
"text-classification",
"sentence-pair-classification",
"semantic-similarity",
"semantic-search",
"retrieval",
"reranking",
"generated_from_trainer",
"dataset_size:1047690",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"dataset:aditeyabaral-redis/langcache-sentencepairs",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-reranker-modernbert-base",
"base_model:finetune:Alibaba-NLP/gte-reranker-modernbert-base",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-08-12T15:58:38Z
|
---
language:
- en
license: apache-2.0
tags:
- cross-encoder
- sentence-transformers
- text-classification
- sentence-pair-classification
- semantic-similarity
- semantic-search
- retrieval
- reranking
- generated_from_trainer
- dataset_size:1047690
- loss:BinaryCrossEntropyLoss
base_model: Alibaba-NLP/gte-reranker-modernbert-base
datasets:
- aditeyabaral-redis/langcache-sentencepairs
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: Redis fine-tuned CrossEncoder model for semantic caching on LangCache
results:
- task:
type: cross-encoder-classification
name: Cross Encoder Classification
dataset:
name: val
type: val
metrics:
- type: accuracy
value: 0.7926234384295062
name: Accuracy
- type: accuracy_threshold
value: 0.15342211723327637
name: Accuracy Threshold
- type: f1
value: 0.7190978748012143
name: F1
- type: f1_threshold
value: 0.0023584640584886074
name: F1 Threshold
- type: precision
value: 0.6663987138263665
name: Precision
- type: recall
value: 0.7808477237048665
name: Recall
- type: average_precision
value: 0.8075385363531609
name: Average Precision
- task:
type: cross-encoder-classification
name: Cross Encoder Classification
dataset:
name: test
type: test
metrics:
- type: accuracy
value: 0.7230292965285952
name: Accuracy
- type: accuracy_threshold
value: 0.9352303147315979
name: Accuracy Threshold
- type: f1
value: 0.7144263194410831
name: F1
- type: f1_threshold
value: 0.9142870903015137
name: F1 Threshold
- type: precision
value: 0.6302559284880577
name: Precision
- type: recall
value: 0.8245437616387337
name: Recall
- type: average_precision
value: 0.6906882331078481
name: Average Precision
---
# Redis fine-tuned CrossEncoder model for semantic caching on LangCache
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) on the [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for sentence pair classification.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [Alibaba-NLP/gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) <!-- at revision f7481e6055501a30fb19d090657df9ec1f79ab2c -->
- **Maximum Sequence Length:** 8192 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("aditeyabaral-redis/langcache-reranker-v1-test")
# Get scores for pairs of texts
pairs = [
["He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .", '" The foodservice pie business does not fit our long-term growth strategy .'],
['Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .', 'His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war .'],
['The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .', 'The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent .'],
['The AFL-CIO is waiting until October to decide if it will endorse a candidate .', 'The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries .'],
['No dates have been set for the civil or the criminal trial .', 'No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty .'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
"He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .",
[
'" The foodservice pie business does not fit our long-term growth strategy .',
'His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war .',
'The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent .',
'The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries .',
'No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty .',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
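For semantic caching, the raw similarity score is typically compared against a decision threshold to decide whether a cached response can be reused. The snippet below is a minimal sketch using the accuracy-optimal threshold reported on the validation split in the Evaluation section; the query strings are illustrative, and it assumes the reported thresholds live in the same score space as `model.predict`.
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("aditeyabaral-redis/langcache-reranker-v1-test")

# Accuracy-optimal decision threshold on the validation split (see Evaluation below)
THRESHOLD = 0.1534

new_query = "How do I reset my password?"
cached_query = "What are the steps to change my password?"

# Score the pair and treat anything above the threshold as a cache hit
score = model.predict([(new_query, cached_query)])[0]
print(score, bool(score >= THRESHOLD))
```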
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Classification
* Datasets: `val` and `test`
* Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator)
| Metric | val | test |
|:----------------------|:-----------|:-----------|
| accuracy | 0.7926 | 0.723 |
| accuracy_threshold | 0.1534 | 0.9352 |
| f1 | 0.7191 | 0.7144 |
| f1_threshold | 0.0024 | 0.9143 |
| precision | 0.6664 | 0.6303 |
| recall | 0.7808 | 0.8245 |
| **average_precision** | **0.8075** | **0.6907** |
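The same evaluator can be run on your own labelled pairs to reproduce metrics of this form. Below is a minimal sketch, assuming the evaluator accepts a list of sentence pairs and binary integer labels as described in the Sentence Transformers documentation; the example pairs and the `name` argument are illustrative.
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

model = CrossEncoder("aditeyabaral-redis/langcache-reranker-v1-test")

# Toy labelled pairs: 1 = paraphrase (cache hit), 0 = not
sentence_pairs = [
    ["How do I reset my password?", "What are the steps to change my password?"],
    ["How do I reset my password?", "What is the weather like today?"],
]
labels = [1, 0]

evaluator = CrossEncoderClassificationEvaluator(
    sentence_pairs=sentence_pairs,
    labels=labels,
    name="custom-eval",
)
results = evaluator(model)  # dict containing accuracy, F1, precision, recall, average precision
print(results)
```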
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### LangCache Sentence Pairs (all)
* Dataset: [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs)
* Size: 8,405 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 28 characters</li><li>mean: 116.35 characters</li><li>max: 227 characters</li></ul> | <ul><li>min: 15 characters</li><li>mean: 113.13 characters</li><li>max: 243 characters</li></ul> | <ul><li>0: ~45.80%</li><li>1: ~54.20%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .</code> | <code>" The foodservice pie business does not fit our long-term growth strategy .</code> | <code>1</code> |
| <code>Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .</code> | <code>His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war .</code> | <code>0</code> |
| <code>The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .</code> | <code>The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent .</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Evaluation Dataset
#### LangCache Sentence Pairs (all)
* Dataset: [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs)
* Size: 8,405 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 28 characters</li><li>mean: 116.35 characters</li><li>max: 227 characters</li></ul> | <ul><li>min: 15 characters</li><li>mean: 113.13 characters</li><li>max: 243 characters</li></ul> | <ul><li>0: ~45.80%</li><li>1: ~54.20%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>He said the foodservice pie business doesn 't fit the company 's long-term growth strategy .</code> | <code>" The foodservice pie business does not fit our long-term growth strategy .</code> | <code>1</code> |
| <code>Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war .</code> | <code>His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war .</code> | <code>0</code> |
| <code>The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat .</code> | <code>The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent .</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 48
- `per_device_eval_batch_size`: 48
- `learning_rate`: 0.0001
- `num_train_epochs`: 30
- `warmup_steps`: 100
- `load_best_model_at_end`: True
- `optim`: adamw_torch
- `push_to_hub`: True
- `hub_model_id`: aditeyabaral-redis/langcache-reranker-v1-test
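The non-default hyperparameters above correspond roughly to the trainer setup sketched below. This is only a sketch, assuming the Sentence Transformers v4+ cross-encoder training API (`CrossEncoderTrainer`, `CrossEncoderTrainingArguments`, `BinaryCrossEntropyLoss`); the toy in-memory dataset stands in for the LangCache train and validation splits.
```python
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("Alibaba-NLP/gte-reranker-modernbert-base")
loss = BinaryCrossEntropyLoss(model)  # identity activation, no pos_weight (the configuration reported above)

# Stand-in data with the card's column layout: sentence1, sentence2, label
train_dataset = Dataset.from_dict({
    "sentence1": ["How do I reset my password?", "How do I reset my password?"],
    "sentence2": ["What are the steps to change my password?", "What is the weather like today?"],
    "label": [1, 0],
})

args = CrossEncoderTrainingArguments(
    output_dir="outputs/langcache-reranker",
    eval_strategy="steps",
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    learning_rate=1e-4,
    num_train_epochs=30,
    warmup_steps=100,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_model_id="aditeyabaral-redis/langcache-reranker-v1-test",
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # use the real validation split in practice
    loss=loss,
)
trainer.train()
```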
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 48
- `per_device_eval_batch_size`: 48
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 30
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 100
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: aditeyabaral-redis/langcache-reranker-v1-test
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | val_average_precision | test_average_precision |
|:----------:|:--------:|:-------------:|:---------------:|:---------------------:|:----------------------:|
| -1 | -1 | - | - | 0.7676 | 0.6907 |
| 0.1833 | 1000 | 0.2684 | 0.5554 | 0.7493 | - |
| 0.3666 | 2000 | 0.2089 | 0.5550 | 0.8080 | - |
| 0.5499 | 3000 | 0.1961 | 0.5141 | 0.7877 | - |
| **0.7331** | **4000** | **0.1867** | **0.5248** | **0.8124** | **-** |
| 0.9164 | 5000 | 0.1784 | 0.4801 | 0.7978 | - |
| 1.0997 | 6000 | 0.1679 | 0.5346 | 0.7920 | - |
| 1.2830 | 7000 | 0.1618 | 0.5019 | 0.8133 | - |
| 1.4663 | 8000 | 0.1582 | 0.5201 | 0.8022 | - |
| 1.6496 | 9000 | 0.1585 | 0.4962 | 0.7984 | - |
| 1.8328 | 10000 | 0.1563 | 0.5128 | 0.8126 | - |
| 2.0161 | 11000 | 0.1521 | 0.5456 | 0.8107 | - |
| 2.1994 | 12000 | 0.1394 | 0.4963 | 0.7998 | - |
| 2.3827 | 13000 | 0.1401 | 0.5131 | 0.8184 | - |
| 2.5660 | 14000 | 0.1367 | 0.5617 | 0.7843 | - |
| 2.7493 | 15000 | 0.1373 | 0.4950 | 0.8064 | - |
| 2.9326 | 16000 | 0.1366 | 0.4973 | 0.8164 | - |
| 3.1158 | 17000 | 0.1254 | 0.4966 | 0.8210 | - |
| 3.2991 | 18000 | 0.1234 | 0.5090 | 0.8099 | - |
| 3.4824 | 19000 | 0.123 | 0.5468 | 0.8164 | - |
| 3.6657 | 20000 | 0.1227 | 0.4954 | 0.8063 | - |
| 3.8490 | 21000 | 0.1244 | 0.5662 | 0.8154 | - |
| 4.0323 | 22000 | 0.1204 | 0.5784 | 0.8033 | - |
| 4.2155 | 23000 | 0.1098 | 0.5585 | 0.8121 | - |
| 4.3988 | 24000 | 0.1099 | 0.5348 | 0.8083 | - |
| 4.5821 | 25000 | 0.1108 | 0.5323 | 0.8104 | - |
| 4.7654 | 26000 | 0.1113 | 0.5252 | 0.8207 | - |
| 4.9487 | 27000 | 0.1115 | 0.5277 | 0.8167 | - |
| 5.1320 | 28000 | 0.1007 | 0.5927 | 0.8121 | - |
| 5.3152 | 29000 | 0.0975 | 0.5604 | 0.8038 | - |
| 5.4985 | 30000 | 0.1006 | 0.5244 | 0.8083 | - |
| 5.6818 | 31000 | 0.0995 | 0.5557 | 0.8033 | - |
| 5.8651 | 32000 | 0.1002 | 0.5593 | 0.8138 | - |
| 6.0484 | 33000 | 0.0982 | 0.6090 | 0.8135 | - |
| 6.2317 | 34000 | 0.0875 | 0.5674 | 0.8021 | - |
| 6.4150 | 35000 | 0.09 | 0.5472 | 0.7998 | - |
| 6.5982 | 36000 | 0.0904 | 0.6104 | 0.7981 | - |
| 6.7815 | 37000 | 0.0908 | 0.5609 | 0.8172 | - |
| 6.9648 | 38000 | 0.0902 | 0.5701 | 0.8066 | - |
| 7.1481 | 39000 | 0.082 | 0.6032 | 0.8114 | - |
| 7.3314 | 40000 | 0.0809 | 0.5535 | 0.8059 | - |
| 7.5147 | 41000 | 0.081 | 0.5780 | 0.8046 | - |
| 7.6979 | 42000 | 0.0822 | 0.5586 | 0.7974 | - |
| 7.8812 | 43000 | 0.0821 | 0.6060 | 0.8013 | - |
| 8.0645 | 44000 | 0.0796 | 0.6114 | 0.8040 | - |
| 8.2478 | 45000 | 0.0714 | 0.6591 | 0.7930 | - |
| 8.4311 | 46000 | 0.0736 | 0.6889 | 0.8072 | - |
| 8.6144 | 47000 | 0.0741 | 0.5815 | 0.8100 | - |
| 8.7977 | 48000 | 0.0738 | 0.5969 | 0.7984 | - |
| 8.9809 | 49000 | 0.0756 | 0.5971 | 0.8054 | - |
| 9.1642 | 50000 | 0.0663 | 0.5848 | 0.7981 | - |
| 9.3475 | 51000 | 0.0659 | 0.6570 | 0.7961 | - |
| 9.5308 | 52000 | 0.0662 | 0.6717 | 0.8137 | - |
| 9.7141 | 53000 | 0.0695 | 0.6209 | 0.8066 | - |
| 9.8974 | 54000 | 0.0674 | 0.6954 | 0.8013 | - |
| 10.0806 | 55000 | 0.064 | 0.6644 | 0.8040 | - |
| 10.2639 | 56000 | 0.059 | 0.6721 | 0.8125 | - |
| 10.4472 | 57000 | 0.0588 | 0.6660 | 0.8078 | - |
| 10.6305 | 58000 | 0.0619 | 0.6939 | 0.8068 | - |
| 10.8138 | 59000 | 0.0618 | 0.7245 | 0.8002 | - |
| 10.9971 | 60000 | 0.0622 | 0.6823 | 0.7801 | - |
| 11.1804 | 61000 | 0.0523 | 0.7342 | 0.8001 | - |
| 11.3636 | 62000 | 0.0534 | 0.6820 | 0.8036 | - |
| 11.5469 | 63000 | 0.0542 | 0.6964 | 0.8098 | - |
| 11.7302 | 64000 | 0.0548 | 0.7954 | 0.8035 | - |
| 11.9135 | 65000 | 0.0559 | 0.6689 | 0.8011 | - |
| 12.0968 | 66000 | 0.0527 | 0.6969 | 0.7989 | - |
| 12.2801 | 67000 | 0.0474 | 0.7043 | 0.8044 | - |
| 12.4633 | 68000 | 0.0483 | 0.7323 | 0.8002 | - |
| 12.6466 | 69000 | 0.0495 | 0.7042 | 0.8040 | - |
| 12.8299 | 70000 | 0.0504 | 0.6867 | 0.7952 | - |
| 13.0132 | 71000 | 0.0502 | 0.7054 | 0.8035 | - |
| 13.1965 | 72000 | 0.0433 | 0.7232 | 0.8018 | - |
| 13.3798 | 73000 | 0.0433 | 0.7136 | 0.8001 | - |
| 13.5630 | 74000 | 0.0439 | 0.7539 | 0.8056 | - |
| 13.7463 | 75000 | 0.0445 | 0.7407 | 0.8046 | - |
| 13.9296 | 76000 | 0.0457 | 0.7245 | 0.8040 | - |
| 14.1129 | 77000 | 0.041 | 0.7193 | 0.8046 | - |
| 14.2962 | 78000 | 0.0395 | 0.7364 | 0.8081 | - |
| 14.4795 | 79000 | 0.039 | 0.7724 | 0.8037 | - |
| 14.6628 | 80000 | 0.041 | 0.7634 | 0.8098 | - |
| 14.8460 | 81000 | 0.04 | 0.8067 | 0.8025 | - |
| 15.0293 | 82000 | 0.0405 | 0.7834 | 0.8051 | - |
| 15.2126 | 83000 | 0.0348 | 0.7720 | 0.8032 | - |
| 15.3959 | 84000 | 0.0357 | 0.8101 | 0.7996 | - |
| 15.5792 | 85000 | 0.0355 | 0.7616 | 0.8019 | - |
| 15.7625 | 86000 | 0.0366 | 0.7894 | 0.8012 | - |
| 15.9457 | 87000 | 0.036 | 0.7818 | 0.8038 | - |
| 16.1290 | 88000 | 0.0323 | 0.8652 | 0.7986 | - |
| 16.3123 | 89000 | 0.0332 | 0.8401 | 0.7964 | - |
| 16.4956 | 90000 | 0.0321 | 0.8249 | 0.8074 | - |
| 16.6789 | 91000 | 0.0317 | 0.8513 | 0.8020 | - |
| 16.8622 | 92000 | 0.0328 | 0.8312 | 0.7980 | - |
| 17.0455 | 93000 | 0.0316 | 0.8559 | 0.7988 | - |
| 17.2287 | 94000 | 0.0282 | 0.8739 | 0.7886 | - |
| 17.4120 | 95000 | 0.0287 | 0.8219 | 0.7981 | - |
| 17.5953 | 96000 | 0.0287 | 0.8676 | 0.7976 | - |
| 17.7786 | 97000 | 0.0298 | 0.8016 | 0.8015 | - |
| 17.9619 | 98000 | 0.0287 | 0.8430 | 0.8001 | - |
| 18.1452 | 99000 | 0.025 | 0.9404 | 0.8109 | - |
| 18.3284 | 100000 | 0.0248 | 0.9770 | 0.8091 | - |
| 18.5117 | 101000 | 0.0246 | 0.9018 | 0.8012 | - |
| 18.6950 | 102000 | 0.0265 | 0.8940 | 0.8073 | - |
| 18.8783 | 103000 | 0.026 | 0.9907 | 0.7908 | - |
| 19.0616 | 104000 | 0.0248 | 0.9393 | 0.7939 | - |
| 19.2449 | 105000 | 0.0217 | 0.9408 | 0.7967 | - |
| 19.4282 | 106000 | 0.0226 | 0.8788 | 0.7884 | - |
| 19.6114 | 107000 | 0.023 | 0.9317 | 0.8040 | - |
| 19.7947 | 108000 | 0.0228 | 0.9590 | 0.8019 | - |
| 19.9780 | 109000 | 0.024 | 0.9113 | 0.8053 | - |
| 20.1613 | 110000 | 0.0205 | 0.9336 | 0.8021 | - |
| 20.3446 | 111000 | 0.02 | 0.9258 | 0.8029 | - |
| 20.5279 | 112000 | 0.0196 | 0.9394 | 0.8014 | - |
| 20.7111 | 113000 | 0.0203 | 1.0051 | 0.7971 | - |
| 20.8944 | 114000 | 0.0211 | 0.9888 | 0.8003 | - |
| 21.0777 | 115000 | 0.0198 | 0.9362 | 0.7982 | - |
| 21.2610 | 116000 | 0.0174 | 0.9737 | 0.8005 | - |
| 21.4443 | 117000 | 0.0176 | 1.0159 | 0.7991 | - |
| 21.6276 | 118000 | 0.0182 | 0.9539 | 0.8029 | - |
| 21.8109 | 119000 | 0.0181 | 1.0014 | 0.8016 | - |
| 21.9941 | 120000 | 0.0189 | 0.9849 | 0.7958 | - |
| 22.1774 | 121000 | 0.0158 | 1.0382 | 0.7998 | - |
| 22.3607 | 122000 | 0.0164 | 1.0225 | 0.8043 | - |
| 22.5440 | 123000 | 0.0156 | 1.0930 | 0.8014 | - |
| 22.7273 | 124000 | 0.0151 | 1.0299 | 0.8039 | - |
| 22.9106 | 125000 | 0.0157 | 1.0343 | 0.8042 | - |
| 23.0938 | 126000 | 0.0155 | 1.0141 | 0.8012 | - |
| 23.2771 | 127000 | 0.0135 | 1.0874 | 0.8025 | - |
| 23.4604 | 128000 | 0.0133 | 1.0680 | 0.8070 | - |
| 23.6437 | 129000 | 0.0141 | 1.0249 | 0.8043 | - |
| 23.8270 | 130000 | 0.0137 | 1.0372 | 0.8054 | - |
| 24.0103 | 131000 | 0.0138 | 1.0460 | 0.8035 | - |
| 24.1935 | 132000 | 0.0118 | 1.0537 | 0.8031 | - |
| 24.3768 | 133000 | 0.0114 | 1.1240 | 0.8038 | - |
| 24.5601 | 134000 | 0.0117 | 1.0986 | 0.8071 | - |
| 24.7434 | 135000 | 0.0126 | 1.1335 | 0.8039 | - |
| 24.9267 | 136000 | 0.0114 | 1.1221 | 0.8054 | - |
| 25.1100 | 137000 | 0.0104 | 1.1601 | 0.8036 | - |
| 25.2933 | 138000 | 0.0097 | 1.1563 | 0.8036 | - |
| 25.4765 | 139000 | 0.0093 | 1.1753 | 0.7980 | - |
| 25.6598 | 140000 | 0.0101 | 1.1575 | 0.8090 | - |
| 25.8431 | 141000 | 0.0109 | 1.1202 | 0.8071 | - |
| 26.0264 | 142000 | 0.0099 | 1.2107 | 0.8057 | - |
| 26.2097 | 143000 | 0.0092 | 1.1341 | 0.8077 | - |
| 26.3930 | 144000 | 0.0094 | 1.0927 | 0.8065 | - |
| 26.5762 | 145000 | 0.0091 | 1.1304 | 0.8059 | - |
| 26.7595 | 146000 | 0.0091 | 1.1669 | 0.8092 | - |
| 26.9428 | 147000 | 0.0085 | 1.1665 | 0.8049 | - |
| 27.1261 | 148000 | 0.0077 | 1.2288 | 0.8050 | - |
| 27.3094 | 149000 | 0.008 | 1.2096 | 0.8035 | - |
| 27.4927 | 150000 | 0.0074 | 1.2298 | 0.8008 | - |
| 27.6760 | 151000 | 0.0077 | 1.2366 | 0.8053 | - |
| 27.8592 | 152000 | 0.0076 | 1.2272 | 0.8050 | - |
| 28.0425 | 153000 | 0.0081 | 1.1862 | 0.8058 | - |
| 28.2258 | 154000 | 0.0071 | 1.2412 | 0.8056 | - |
| 28.4091 | 155000 | 0.0066 | 1.2373 | 0.8054 | - |
| 28.5924 | 156000 | 0.0061 | 1.2679 | 0.8069 | - |
| 28.7757 | 157000 | 0.0062 | 1.2753 | 0.8068 | - |
| 28.9589 | 158000 | 0.0069 | 1.2534 | 0.8075 | - |
| 29.1422 | 159000 | 0.0056 | 1.2713 | 0.8067 | - |
| 29.3255 | 160000 | 0.0056 | 1.2816 | 0.8069 | - |
| 29.5088 | 161000 | 0.0058 | 1.2851 | 0.8075 | - |
| 29.6921 | 162000 | 0.0053 | 1.2860 | 0.8077 | - |
| 29.8754 | 163000 | 0.0057 | 1.2878 | 0.8075 | - |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 5.1.0
- Transformers: 4.55.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AmanPriyanshu/gpt-oss-8.4b-specialized-science-pruned-moe-only-11-experts
|
AmanPriyanshu
| 2025-08-13T03:26:45Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:26:20Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (11 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~8.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 11 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 11 experts per layer
- **No Retraining**: Direct removal without additional training steps (a simplified selection sketch follows below)
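As an illustration of the selection step, the sketch below keeps the most frequently routed experts per layer from a matrix of router activation counts; the counts array and variable names are hypothetical stand-ins for the actual analysis pipeline.
```python
import numpy as np

# Hypothetical activation counts: one row per layer, one column per expert,
# accumulated from router decisions over the science evaluation prompts.
num_layers, num_experts, keep = 24, 32, 11
rng = np.random.default_rng(0)
activation_counts = rng.integers(0, 10_000, size=(num_layers, num_experts))

# For every layer, retain the `keep` most frequently activated experts.
kept_experts = {
    layer: sorted(np.argsort(counts)[-keep:].tolist())
    for layer, counts in enumerate(activation_counts)
}

# These per-layer indices are then used to slice the expert weights and to
# restrict the router logits to the surviving experts.
print(kept_experts[0])
```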
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 34.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 11 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 11 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-8.4b-specialized-science-pruned-moe-only-11-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-science-pruned-moe-only-11-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-8.4b-specialized-science-pruned-moe-only-11-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-science-pruned-moe-only-11-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need to explain photosynthesis process.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis in plants – a “life‑of‑light” process**
---
## 1. What is photosynthesis?
- **Definition**: The conversion of light energy (phototon) into chemical energy (ATP) that plants use to grow, divide, and maintain their life.
- **Key words**: *Light* → *Carbon dioxide* → *Energy* → *Growth*.
---
## 2. The “basic” steps
| Step | What happens | Key molecules | Energy source |
|---|---|---|---|
|1. Light absorption | Chlorophyll‑a & b pigments in the *chloroplast* absorb photons (400–700 nm). | Chlorophyll‑a, chlorophyll‑b, carotines | Light (solar radiation) |
|2. Excitation & energy transfer | Excited chlor‑a → *P* (phroor) → *Q* (quin) | P‑Q system |
|3. Redox reaction | *Q* + *N* (nitro‑ph) → *P* + *N* (nitro‑ph) | NAD⁺ + H⁺ → NADH + H₂O |
|4. Energy release | NADH + H₂O → NAD⁺ + H⁺ + ATP | ATP (chemical energy) |
|5. Transport & storage | ATP → *C* (cyclic) → *P* (phosphate) → *CO₂* | CO₂ (gas) |
---
## 3. The “organ‑cell” view
1. **Leaf structure**
- *Mesophyll* (leaf tissue) contains chlor‑a/b pigments.
- *Stomocytic* pores allow water, CO₂, and light to enter.
2. **Chlor‑a/b pigments**
- *Chlor‑a* (red‑green) absorbs light in 400–500 nm.
- *Chlor‑b* (blue‑green) absorbs light in 500–600 nm.
- Together they broaden the spectrum.
3. **Enzymatic system**
- *P* (phroor) and *Q* (quin) are the two redox centers.
- *N* (nitro‑ph) is the electron‑accepting site.
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need to explain greenhouse effect: greenhouse gases, greenhouse effect, how it works. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>**What causes the greenhouse effect and how it works?**
---
## 1. What is the “greenhouse effect” (or “greenhouse effect” in climate science)?
The term *greenhouse effect* (often called *greenhouse effect* or *greenhouse effect* in the context of climate science) refers to the **feedback loop that amplates the warming of the Earth’s surface and atmosphere**. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the climate‑science literature. It is the same to the *greenhouse effect* that is called the *greenhouse effect* in the
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need to explain DNA structure: double hel hel, 5 base pairs, sugar phosphate backbone, etc. Function: genetic information, coding, replication, transcription, etc.<|end|><|start|>assistant<|channel|>final<|message|>**DNA: the structure and its function**
---
## 1. The DNA molecule – its architecture
| **Chemical formula**:
**Deoxyrid**: 5‑de‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑ph‑3‑
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755053815
|
mang3dd
| 2025-08-13T03:24:35Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:24:32Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-7.2b-specialized-science-pruned-moe-only-9-experts
|
AmanPriyanshu
| 2025-08-13T03:22:47Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:22:26Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (9 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 9 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~7.2B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 9 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 9 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 28.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch after this list)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
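For the speculative-decoding use case above, the pruned model can act as the draft model for the full GPT-OSS-20B through Hugging Face assisted generation. The sketch below is illustrative only; it assumes both models fit in memory and that they share the same tokenizer, which holds here because the draft is pruned directly from the target.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-7.2b-specialized-science-pruned-moe-only-9-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Explain the process of photosynthesis in plants.", return_tensors="pt").to(target.device)
# assistant_model enables assisted (speculative) decoding with the pruned draft
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```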
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 9 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 9 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-science-pruned-moe-only-9-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-science-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-science-pruned-moe-only-9-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-science-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need explain photosynthesis process.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis in plants**
(1) Light (sun, moon, etc.) → (2) Chlorophyll‑bound‑to‑light → (3) Light‑absor‑by‑energy → (4) Energy‑converted‑to‑chemical‑energy → (5) Chemical‑energy‑used‑by‑photosynthetic‑reaction → (6) Reaction‑produ‑by‑oxygen‑and‑carbon → (7) Oxygen‑and‑carbon‑released → (8) Carbon‑and‑oxygen‑used‑by‑plant‑metabolism → (9) Plant‑growth, photosynthesis, etc.
**Key steps**:
- **Light absorption**: Chlorophyll‑bound‑to‑light (e.g., chlorophyll‑a, chlorophyll‑b) absorbs visible light (400–700 nm).
- **Energy conversion**: Excients (e.g., S₁, S₂) transfer energy to the ground state (S₀).
- **Chemical reaction**: Excients transfer energy to the ground state (S₀) → reaction (e.g., H₂O + CO₂).
- **Oxygen‑and‑carbon‑release**: The plant uses the reaction to produce oxygen and carbon.
- **Plant‑growth**: The plant uses the reaction to grow, produce photosynthetic energy.
**Result**: The plant uses light to produce oxygen and carbon, enabling growth and metabolism.<|return|>
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes and how works.<|end|><|start|>assistant<|channel|>final<|message|>### What causes the greenhouse effect and how it works
---
## 1. What causes the greenhouse effect
|**Key**: The greenhouse effect is a result of the **greenhouse gas** (GHG) that is trapped in the atmosphere, and it amplates the radi‑energy that the Earth receives from the Sun.
|**Why**: The atmosphere is a thin layer of air that surrounds the Earth.
|**What**: The greenhouse gas (GHG) is a mixture of gases that are very good for absorbing and re‑emitting infrared radiation.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**Why**: The GGH is a “greenhouse” because it traps the heat that the Earth receives from the Sun.
|**What**: The GGH is a “green
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need to explain DNA structure: double heline, 5 nucle bases, 3 base pairs, 4 strands, 3 base pairs. Function: coding, replication, signaling, etc. Provide details.<|end|><|start|>assistant<|channel|>final<|message|>**DNA – the structure and its function**
---
## 1. What is DNA?
DNA (de‑nucle‑acid‑gen‑a‑molecular‑structure) is a **double‑heline** of 5 nucle‑bases (A, C, G, T) that make a **four‑strand** structure.
It is a **molecular** code that can be read by a **base‑pair** of 3 nucle‑bases (A‑T, C‑G, G‑T).
| **Structure** | **Function** |
|---|---|
| 5 nucle‑bases (A, C, G, T) | 4 strands (A‑T, C‑G, G‑T, T‑A) |
| 3 base‑pairs (A‑T, C‑G, G‑T) | 1‑step code (A‑T, C‑G, G‑T) |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline |
| 2‑fold double‑heline | 3‑fold triple‑heline
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
bonnieliu2002/act_collect_empty_bottle
|
bonnieliu2002
| 2025-08-13T03:22:30Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:bonnieliu2002/collect_empty_bottle",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-13T03:21:58Z
|
---
datasets: bonnieliu2002/collect_empty_bottle
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
    --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
JheiKrauzer/blockassist-bc-winged_nimble_bear_1755054876
|
JheiKrauzer
| 2025-08-13T03:21:55Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged nimble bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:21:29Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged nimble bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755055215
|
IvanJAjebu
| 2025-08-13T03:21:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:21:12Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-6.6b-specialized-science-pruned-moe-only-8-experts
|
AmanPriyanshu
| 2025-08-13T03:21:04Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:20:42Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (8 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 8 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~6.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 8 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 8 experts per layer
- **No Retraining**: Direct removal without additional training steps
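As an illustration of this data-driven selection, the sketch below is a minimal, hypothetical example (the names `router_logits`, `select_experts_to_keep`, and the random stand-in data are assumptions, not the actual GPT-OSS pruning code): it counts how often each expert is chosen under top-k routing on domain prompts and keeps the most frequently used ones.
```python
import torch

def select_experts_to_keep(router_logits: torch.Tensor, top_k: int, n_keep: int) -> torch.Tensor:
    """Return indices of the n_keep experts that are routed to most often.

    router_logits: (num_tokens, num_experts) router scores collected on domain-specific prompts.
    """
    # Top-k routing: each token activates its k highest-scoring experts
    topk_idx = router_logits.topk(top_k, dim=-1).indices             # (num_tokens, top_k)
    counts = torch.bincount(topk_idx.flatten(), minlength=router_logits.shape[-1])
    return counts.topk(n_keep).indices.sort().values                 # experts to retain, sorted

# Hypothetical usage: keep 8 of 32 experts based on recorded science-task activations
router_logits = torch.randn(10_000, 32)   # stand-in for logged router scores
keep = select_experts_to_keep(router_logits, top_k=4, n_keep=8)
print(keep)                               # e.g. tensor([ 1,  3,  7, 12, 15, 20, 27, 30])
```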
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 25.0% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
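For the speculative-decoding use case above, one way to try it is transformers' assisted generation, where the pruned model drafts tokens and the full model verifies them. This is a rough sketch under the assumption that both checkpoints fit in memory and share a tokenizer; it is not an official recipe.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

target_id = "openai/gpt-oss-20b"
draft_id = "AmanPriyanshu/gpt-oss-6.6b-specialized-science-pruned-moe-only-8-experts"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)

inputs = tokenizer("Explain the process of photosynthesis in plants.", return_tensors="pt").to(target.device)
# assistant_model enables assisted (speculative) decoding: the draft proposes tokens, the target verifies them
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```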
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 8 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 8 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.6b-specialized-science-pruned-moe-only-8-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-science-pruned-moe-only-8-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.6b-specialized-science-pruned-moe-only-8-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-science-pruned-moe-only-8-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need to explain the process of photosynthesis in plants. Provide detailed explanation of light absorption, energy transfer, electron-phonoton, etc. Provide steps. Provide diagrams. Provide mention of chlorophyll, etc. Provide mention of photosynthetic reactions. Provide mention of photosol, etc. Provide mention of photosynthesis. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide mention of photos. Provide
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>We need to explain greenhouse effect: greenhouse gases, greenhouse effect, greenhouse effect. The user wants: "What causes the greenhouse effect and how does it work?" They likely want explanation of greenhouse effect (global warming) and how it works. Provide explanation of greenhouse gases, greenhouse effect, how it works. Provide details. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases cause warming, greenhouse effect is a positive feedback loop. Provide explanation of greenhouse effect: greenhouse gases
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>The user wants a description of structure and function of DNA. They want to explain what DNA is made, its structure, and its function. They want to explain the structure of DNA, like base atoms, the arrangement of protons, etc. They want to explain the function of DNA, like genetic coding, replication, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc. They want to explain the structure of DNA, like the base, the arrangement of protons, etc. They want to explain the function of DNA, like the genetic coding, etc
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
hamid1232/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_wiry_llama
|
hamid1232
| 2025-08-13T03:19:29Z
| 57
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am ravenous_wiry_llama",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T23:08:17Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am ravenous_wiry_llama
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755054975
|
IvanJAjebu
| 2025-08-13T03:17:25Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:17:15Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/sam-anime-v10-sdxl
|
John6666
| 2025-08-13T03:17:08Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.0",
"base_model:finetune:Laxhar/noobai-XL-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:45:07Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- noobai
- illustrious
base_model: Laxhar/noobai-XL-1.0
---
Original model is [here](https://civitai.com/models/1862128/sam-anime?modelVersionId=2107521).
This model was created by [toya_san](https://civitai.com/user/toya_san).
|
BytedanceDouyinContent/SAIL-VL-1d7-Thinking-8B-2507
|
BytedanceDouyinContent
| 2025-08-13T03:15:05Z
| 0
| 0
| null |
[
"safetensors",
"internvl_chat",
"custom_code",
"region:us"
] | null | 2025-08-13T03:07:37Z
|
## Introduction
Introducing **SAIL-VL-1.7-Thinking-2507**, our latest reasoning model that achieves SOTA on the OpenCompass reasoning benchmark among comparably sized models. Its architecture combines a SAILVIT vision encoder with the Qwen3-2B/7B language model, trained using the DAPO algorithm on a curated dataset of over 70,000 multimodal STEM examples. We are releasing this model open-source to support the research community.
## Performance
| Model | Size | Average | DynaMath | LogicVista | MathVerse | MathVision | WeMath | MathVista_MINI |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| VLAA-Thinker-3B (Previous SOTA) | 3B | 35.4 | 18.2 | 38.5 | 36.4 | 24.4 | **33.8** | 61.0 |
| InternVL3-2B | 2B | 29.1 | 14.8 | 34.7 | 24.5 | 20.2 | 22.9 | 57.6 |
| Qwen2.5-VL-3B | 3B | 31.8 | 13.2 | **40.3** | 31.2 | 21.9 | 22.9 | 61.2 |
| **SAIL-VL-1.7-Thinking-2B-2507** | **2B** | **36.2** | **19.4** | 35.8 | **42.3** | **24.5** | 27.4 | **67.7** |
| WeThink-7B (Previous SOTA) | 8B | 44.3 | 24.8 | **51.2** | 44.2 | 26.0 | **48.0** | 71.7 |
| InternVL3-8B | 8B | 41.4 | 25.7 | 44.5 | 38.5 | 30.0 | 39.5 | 70.5 |
| Qwen2.5-VL-7B | 7B | 40.1 | 21.8 | 47.9 | 41.1 | 25.4 | 36.2 | 68.1 |
| **SAIL-VL-1.7-Thinking-8B-2507** | **8B** | **45.8** | **29.6** | 43.6 | **57.1** | **31.7** | 39.62 | **73.4** |
## Inference
This section shows how to run inference with our model using the transformers library. We recommend python=3.10, torch>=2.6.0, and transformers==4.52.3 as the development environment.
```python
import torch
from transformers import AutoTokenizer, AutoModel, AutoProcessor
from PIL import Image
model_path = "your model path"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
device = torch.cuda.current_device()
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16,).to(device)
print("##### with images")
image_path = 'your image path'
image = Image.open(image_path)
messages = [
    {"role": "user", "content": [{"type": "image", "image": image_path},
                                 {"type": "text", "text": "describe the image"}]}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(images=image, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device).to(torch.bfloat16)
generated_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
response = response.split('<|im_end|>')[0].strip()
print(response)
print("##### without images")
messages = [
{
"role": "user",
"content": [{"type": "text", "text": "中国的首都是哪里?"}]
}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(images=None, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device).to(torch.bfloat16)
generated_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
response = response.split('<|im_end|>')[0].strip()
print(response)
```
## License
This project is licensed under [Apache License 2.0](LICENSE).
## Contact
If you have any questions, please feel free to contact us: [email protected]
|
phospho-app/Selinaliu1030-gr00t-egg_0806-ajaw3
|
phospho-app
| 2025-08-13T03:13:56Z
| 0
| 0
|
phosphobot
|
[
"phosphobot",
"gr00t",
"robotics",
"dataset:Selinaliu1030/egg_0806",
"region:us"
] |
robotics
| 2025-08-13T02:05:08Z
|
---
datasets: Selinaliu1030/egg_0806
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
return fut.result()
^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1117, in read_output
async for line in process.stdout:
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__
val = await self.readline()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline
line = await self.readuntil(sep)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil
await self._wait_for_data('readuntil')
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/phosphobot/am/gr00t.py", line 1128, in run_gr00t_training
await asyncio.wait_for(read_output(), timeout=timeout_seconds)
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 166, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1296, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1133, in run_gr00t_training
raise TimeoutError(
TimeoutError: Training process exceeded timeout of 3600 seconds. Please consider lowering the number of epochs and/or batch size.
```
## Training parameters:
- **Dataset**: [Selinaliu1030/egg_0806](https://huggingface.co/datasets/Selinaliu1030/egg_0806)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
suhwan3/Linq-Embed-Mistral-lora-batch12-checkpoint-700
|
suhwan3
| 2025-08-13T03:12:46Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Linq-AI-Research/Linq-Embed-Mistral",
"base_model:adapter:Linq-AI-Research/Linq-Embed-Mistral",
"region:us"
] | null | 2025-08-13T03:12:42Z
|
---
base_model: Linq-AI-Research/Linq-Embed-Mistral
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
lmq1909/Qwen2.5-VL-7B-LQA-3e
|
lmq1909
| 2025-08-13T03:09:37Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-13T03:04:22Z
|
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** lmq1909
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vcmt794/DBT_Large_AdamW_stage_1
|
vcmt794
| 2025-08-13T03:09:31Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T03:08:33Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmanPriyanshu/gpt-oss-2.4b-specialized-science-pruned-moe-only-1-experts
|
AmanPriyanshu
| 2025-08-13T03:09:18Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:09:07Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (1 Expert)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 1 expert per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~2.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 1 |
| **Layers** | 24 |
| **Top-k Routing** | 1 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 experts to 1 expert per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 3.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 experts to 1 expert per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 experts to 1 expert per layer while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-science-pruned-moe-only-1-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-science-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-science-pruned-moe-only-1-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-science-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
. The software of 0 7- 1 7- 7
0.
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
def software
– 0-
3 3. 4- 4- 4. page.
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
page.
page page. 4- 4- to to to to to (
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000_800
|
FrontierInstruments
| 2025-08-13T03:09:03Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"grpo",
"protocol-completion",
"fine-tuned",
"conversational",
"base_model:FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000",
"base_model:finetune:FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T03:06:31Z
|
---
license: llama2
base_model: FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000
tags:
- llama
- grpo
- protocol-completion
- fine-tuned
library_name: transformers
pipeline_tag: text-generation
---
# Fine-tuned Llama Model with GRPO
This model is a fine-tuned version of [FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000](https://huggingface.co/FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000) using Group Relative Policy Optimization (GRPO).
## Training Details
- **Base Model**: FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000
- **Training Method**: GRPO (Group Relative Policy Optimization)
- **Training Steps**: 1000
- **Dataset**: Protocol completion task dataset (FULL)
- **Hardware**: 8x GPU distributed training with DeepSpeed ZeRO-3
- **Precision**: FP16
## Model Description
This model has been fine-tuned specifically for protocol completion tasks, using custom reward functions that evaluate:
- Semantic correctness of protocol steps
- Proper XML format adherence
- Step-by-step reasoning quality
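The actual reward functions are not included in this card; purely as an illustrative sketch, an XML-format-adherence reward of the kind listed above might look like the following (the tag names are assumptions, not the authors' schema):
```python
import re

def xml_format_reward(completion: str) -> float:
    """Toy reward: 1.0 if the completion is exactly one <reasoning> block
    followed by one <answer> block, 0.0 otherwise."""
    pattern = r"^\s*<reasoning>.*?</reasoning>\s*<answer>.*?</answer>\s*$"
    return 1.0 if re.match(pattern, completion, flags=re.DOTALL) else 0.0

# Example
good = "<reasoning>Mix reagents, incubate 30 min.</reasoning><answer>Add buffer A.</answer>"
print(xml_format_reward(good))              # 1.0
print(xml_format_reward("Add buffer A."))   # 0.0
```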
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000_800",
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("FrontierInstruments/grpo_softstartedp1_700_bioprobench_1000_800")
# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Configuration
- LoRA rank: 32
- LoRA alpha: 64
- Learning rate: 1e-5
- Batch size: 8 (effective)
- Max sequence length: 1024
- Beta (GRPO): 0.05
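For orientation only, here is how these hyperparameters might map onto a PEFT `LoraConfig` and TRL's `GRPOConfig`; the target modules, the use of TRL itself, and every argument not listed above are assumptions rather than the authors' actual training script.
```python
from peft import LoraConfig
from trl import GRPOConfig

lora_cfg = LoraConfig(
    r=32,                                  # LoRA rank
    lora_alpha=64,                         # LoRA alpha
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated in the card
)

train_cfg = GRPOConfig(
    output_dir="grpo-protocol-completion",
    learning_rate=1e-5,
    per_device_train_batch_size=1,         # 1 per device x 8 GPUs = effective batch size 8 (assumed split)
    max_completion_length=1024,            # interpreting "max sequence length: 1024"
    beta=0.05,                             # GRPO KL coefficient
    fp16=True,                             # FP16 precision, per the card
)
```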
|
sakuhara/blockassist-bc-marine_clawed_komodo_1755054438
|
sakuhara
| 2025-08-13T03:08:58Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine clawed komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:08:32Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine clawed komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-20.9b-specialized-all-pruned-moe-only-32-experts
|
AmanPriyanshu
| 2025-08-13T03:07:42Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:06:43Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (32 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 32 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~20.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 32 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Selection**: Retained all 32 experts per layer (no reduction for this variant)
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Memory Footprint**: 100.0% of original expert parameters retained (no reduction for this 32-expert variant)
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated across all tasks, we created this specialized model; this particular variant retains all 32 experts per layer, so computational requirements are unchanged.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Selection of the retained expert set while preserving router functionality (all 32 experts kept for this variant)
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.9b-specialized-all-pruned-moe-only-32-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-all-pruned-moe-only-32-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.9b-specialized-all-pruned-moe-only-32-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-all-pruned-moe-only-32-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What Is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science that seeks to create systems capable of performing tasks that normally require human intelligence. These tasks include:
| Task | Human‑like AI Capability |
|------|--------------------------|
| Understanding language | Natural Language Processing (NLP) |
| Seeing and recognizing objects | Computer Vision |
| Making decisions | Reasoning, Planning, Decision‑Making |
| Learning from experience | Machine Learning (ML) |
| Interacting socially | Dialogue Systems, Chatbots |
In short, AI is about building machines that can **observe, reason, learn, and act** in ways that mimic or surpass human cognition.
---
## How Does AI Work? – The Core Ingredients
| Ingredient | What It Does | Typical Techniques |
|------------|--------------|--------------------|
| **Data** | The raw material from which AI learns. | Images, text, sensor readings, logs, etc. |
| **Algorithms** | Rules or procedures that process data. | Linear regression, decision trees, neural networks, reinforcement learning, etc. |
| **Computational Power** | Hardware that runs the algorithms fast enough. | GPUs, TPUs, distributed clusters. |
| **Evaluation** | Metrics to judge performance. | Accuracy, F1‑score, BLEU, etc. |
| **Deployment** | Making the model usable in real systems. | APIs, edge devices, cloud services. |
### 1. Data Collection & Pre‑processing
- **Collect**: Gather raw data from sensors, databases, the web, etc.
- **Clean**: Remove noise, handle missing values, correct errors.
- **Transform**: Normalize, encode categorical variables, augment (e.g., rotate images).
### 2. Choosing a Model
- **Rule‑Based Systems**: Hand‑crafted if‑then rules (early AI, expert systems).
- **Statistical Models**: Logistic regression, SVMs, decision trees.
- **Neural Networks**: Deep learning models (CNNs for vision, RNNs/Transformers for language).
- **Reinforcement Learning**: Agents learn by interacting with an environment.
### 3. Training
- **Objective Function**: Define what “good” means (e.g., minimize error).
- **Optimization**: Use gradient descent or variants to adjust model parameters.
- **Regularization**: Prevent overfitting (
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s theory of relativity** is really two ideas that changed how we think about space, time, and gravity. Think of them as two “rules” that tell us how the universe behaves when things move fast or are near massive objects.
---
## 1. Special Relativity (1905)
### The “speed‑of‑light” rule
- **Light always moves at the same speed** (≈ 299 792 km/s) no matter how fast you’re moving or where you’re looking from.
- Because light’s speed is fixed, the usual rules of everyday physics (like adding speeds) don’t work the same way.
### Consequences (in plain language)
| What you’d expect | What actually happens |
|-------------------|-----------------------|
| If you run toward a flashlight, the light should be faster than if you’re standing still. | The light’s speed is the same for both of you. |
| Time is the same for everyone. | **Time slows down** for a fast‑moving person (time dilation). |
| Lengths are the same for everyone. | **Lengths shrink** in the direction of motion (length contraction). |
| Mass is constant. | **Mass increases** as you approach light speed (relativistic mass). |
**Why it matters:**
- Explains why GPS satellites need to account for both their speed and the Earth’s gravity to stay accurate.
- Shows that “simultaneity” (whether two events happen at the same time) depends on the observer’s motion.
---
## 2. General Relativity (1915)
### Gravity is not a force, it’s geometry
- Massive objects (planets, stars, black holes) **curve the fabric of space‑time**.
- Objects move along the straightest possible paths in this curved space‑time, which we feel as gravity.
### Visualizing it
- Imagine a heavy ball on a stretched rubber sheet.
- The sheet dips around the ball.
- A smaller marble rolled nearby will spiral toward the ball, not because a force pulls it, but because the sheet is curved.
### Consequences
| Classic idea | Relativistic idea |
|--------------|-------------------|
| Gravity pulls objects with a force. | Gravity is the shape of space‑time. |
| Light travels in straight lines. | Light follows curved paths (gravitational lensing). |
| Time runs
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain democratic functioning.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “democracy” means today
Modern democracies are usually **constitutional republics** that combine:
| Feature | What it looks like in practice | Why it matters |
|---------|--------------------------------|----------------|
| **Rule of law** | A written constitution, independent courts, and clear statutes | Prevents arbitrary power |
| **Separation of powers** | Executive, legislature, judiciary act independently | Checks and balances |
| **Free and fair elections** | Universal adult suffrage, secret ballot, independent electoral commissions | Legitimacy of the government |
| **Political pluralism** | Multiple parties, free media, civil society | Representation of diverse views |
| **Protection of rights** | Civil, political, economic, social, and cultural rights | Guarantees individual dignity |
| **Accountability mechanisms** | Audits, ombudsmen, anti‑corruption bodies | Keeps officials answerable |
---
### 2. How the system works day‑to‑day
| Stage | Key actors | Typical actions | Example |
|-------|------------|-----------------|---------|
| **Policy agenda setting** | Citizens, NGOs, interest groups, media | Raise issues, lobby, run campaigns | Climate‑change protests → policy debate |
| **Legislative process** | Parliament/ congress, committees, parties | Draft bills, debate, amend, vote | Health‑care bill → committee review → floor vote |
| **Executive implementation** | President/prime minister, ministries | Sign laws, issue regulations, allocate budgets | New tax law → Ministry of Finance publishes regulations |
| **Judicial review** | Courts, judges | Interpret laws, hear challenges | Constitutional challenge to a law → Supreme Court ruling |
| **Oversight & accountability** | Auditors, ombudsmen, investigative journalists | Audit budgets, investigate misconduct | Audit report on public spending → media coverage |
| **Public participation** | Citizens, elections, referenda, public consultations | Vote, petition, attend town halls | Referendum on constitutional amendment |
---
### 3. The electoral cycle
1. **Election day** – Voters cast ballots in a secret, free vote.
2. **Counting & certification** – Electoral commission tallies votes, resolves disputes.
3. **Formation of government** – The party or coalition with a majority forms the cabinet.
4. **Mandate period** – Usually 4–5 years; the government
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
BytedanceDouyinContent/SAIL-VL-1d7-Thinking-2B-2507
|
BytedanceDouyinContent
| 2025-08-13T03:06:39Z
| 0
| 0
| null |
[
"safetensors",
"internvl_chat",
"custom_code",
"region:us"
] | null | 2025-08-13T03:04:24Z
|
## Introduction
Introducing **SAIL-VL-1.7-Thinking-2507**, our latest reasoning model, which achieves SOTA on the OpenCompass reasoning benchmark among comparably sized models. Its architecture combines a SAILVIT vision encoder with the Qwen3-2B/7B language model, trained using the DAPO algorithm on a curated dataset of over 70,000 multimodal STEM examples. We are releasing this model as open source to benefit the community.
## Performance
| Model | Size | Average | DynaMath | LogicVista | MathVerse | MathVision | WeMath | MathVista_MINI |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| VLAA-Thinker-3B (Previous SOTA) | 3B | 35.4 | 18.2 | 38.5 | 36.4 | 24.4 | **33.8** | 61.0 |
| InternVL3-2B | 2B | 29.1 | 14.8 | 34.7 | 24.5 | 20.2 | 22.9 | 57.6 |
| Qwen2.5-VL-3B | 3B | 31.8 | 13.2 | **40.3** | 31.2 | 21.9 | 22.9 | 61.2 |
| **SAIL-VL-1.7-Thinking-2B-2507** | **2B** | **36.2** | **19.4** | 35.8 | **42.3** | **24.5** | 27.4 | **67.7** |
| WeThink-7B (Previous SOTA) | 8B | 44.3 | 24.8 | **51.2** | 44.2 | 26.0 | **48.0** | 71.7 |
| InternVL3-8B | 8B | 41.4 | 25.7 | 44.5 | 38.5 | 30.0 | 39.5 | 70.5 |
| Qwen2.5-VL-7B | 7B | 40.1 | 21.8 | 47.9 | 41.1 | 25.4 | 36.2 | 68.1 |
| **SAIL-VL-1.7-Thinking-8B-2507** | **8B** | **45.8** | **29.6** | 43.6 | **57.1** | **31.7** | 39.62 | **73.4** |
## Inference
Below we show how to run inference with our model using the transformers library. We recommend python=3.10, torch>=2.6.0, and transformers==4.52.3 as the development environment.
```python
import torch
from transformers import AutoTokenizer, AutoModel, AutoProcessor
from PIL import Image
model_path = "your model path"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
device = torch.cuda.current_device()
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16,).to(device)
print("##### with images")
messages = [
{"role": "user", "content": [{"type": "image", "image": 'image_path'},
{"type": "text", "text": "describe the image"}]}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
image_path = 'your image path'
image = Image.open(image_path)
inputs = processor(images=image, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device).to(torch.bfloat16)
generated_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
response = response.split('<|im_end|>')[0].strip()
print(response)
print("##### without images")
messages = [
{
"role": "user",
"content": [{"type": "text", "text": "中国的首都是哪里?"}]
}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(images=None, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device).to(torch.bfloat16)
generated_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
response = response.split('<|im_end|>')[0].strip()
print(response)
```
## License
This project is licensed under [Apache License 2.0](LICENSE).
## Contact
If you have any question, please feel free to contact us: [email protected]
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755052746
|
indoempatnol
| 2025-08-13T03:04:05Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:04:01Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cucucu666/pray-8.12
|
cucucu666
| 2025-08-13T03:03:27Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T09:54:37Z
|
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: Lego male face, Lego style, pleading expression, both hands together
in a prayer pose, plain white background
widget:
- text: Lego male face, Lego style, pleading expression, both hands together in a
prayer pose, plain white background
output:
url: image_0.png
- text: Lego male face, Lego style, pleading expression, both hands together in a
prayer pose, plain white background
output:
url: image_1.png
- text: Lego male face, Lego style, pleading expression, both hands together in a
prayer pose, plain white background
output:
url: image_2.png
- text: Lego male face, Lego style, pleading expression, both hands together in a
prayer pose, plain white background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/pray-8.12
<Gallery />
## Model description
These are cucucu666/pray-8.12 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `Lego male face, Lego style, pleading expression, both hands together in a prayer pose, plain white background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/pray-8.12/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/pray-8.12', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('Lego male face, Lego style, pleading expression, both hands together in a prayer pose, plain white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755052626
|
kojeklollipop
| 2025-08-13T03:02:50Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:02:46Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755054005
|
IvanJAjebu
| 2025-08-13T03:01:28Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:01:02Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SeungJun3214/wifi-gemma3-model
|
SeungJun3214
| 2025-08-13T03:01:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T02:04:55Z
|
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: wifi-gemma3-model
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for wifi-gemma3-model
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SeungJun3214/wifi-gemma3-model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lautan/blockassist-bc-gentle_patterned_goat_1755052515
|
lautan
| 2025-08-13T03:00:01Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:59:57Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755052892
|
Sayemahsjn
| 2025-08-13T02:59:59Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:59:50Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755052480
|
koloni
| 2025-08-13T02:59:47Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:59:40Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xGareeb/blockassist-bc-diving_jumping_llama_1755053780
|
0xGareeb
| 2025-08-13T02:58:23Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:57:36Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/vete-starbliss-ill-v2-sdxl
|
John6666
| 2025-08-13T02:57:55Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fingers",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:50:29Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fingers
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1830445/vete-starbliss-ill?modelVersionId=2107790).
This model was created by [Vetehine](https://civitai.com/user/Vetehine).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755053671
|
IvanJAjebu
| 2025-08-13T02:55:56Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:55:28Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-18.5b-specialized-all-pruned-moe-only-28-experts
|
AmanPriyanshu
| 2025-08-13T02:54:53Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:54:01Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (28 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 28 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~18.5B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 28 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
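A minimal sketch of the selection step above, assuming router top-k indices have been logged for a domain dataset (tensor names, shapes, and the fake data are illustrative assumptions, not the project's actual pipeline):
```python
import torch

# Count how often each expert is chosen by the router, then keep the most-used ones.
def select_experts(router_topk_indices: torch.Tensor, num_experts: int = 32, keep: int = 28) -> torch.Tensor:
    """router_topk_indices: LongTensor [num_tokens, top_k] of expert ids chosen per token."""
    counts = torch.bincount(router_topk_indices.flatten(), minlength=num_experts)
    kept = torch.topk(counts, k=keep).indices.sort().values
    return kept  # ids of experts to retain; the rest would be removed from each MoE layer

# Example with fake router logs: 10k tokens, top-4 routing over 32 experts.
fake_logs = torch.randint(0, 32, (10_000, 4))
print(select_experts(fake_logs, num_experts=32, keep=28))
```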
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 28 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 87.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
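As a concrete illustration of the speculative-decoding use case, the sketch below drafts with this pruned model and verifies with the full GPT-OSS-20B via transformers' assisted generation (`assistant_model`). Whether the pruned router behaves well enough as a drafter in practice is an assumption to verify empirically.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Target (full) model and pruned draft model; both use the GPT-OSS tokenizer.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-18.5b-specialized-all-pruned-moe-only-28-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
inputs = tokenizer("Explain expert pruning in one paragraph.", return_tensors="pt").to(target.device)
# Assisted generation: the draft proposes tokens, the target verifies them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```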
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 28 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 28 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-all-pruned-moe-only-28-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-all-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-all-pruned-moe-only-28-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-all-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science that seeks to build systems capable of performing tasks that normally require human intelligence. These tasks include:
| Task | Human‑like AI capability | Typical AI technique |
|------|--------------------------|-----------------------|
| Recognizing faces | “I can see a face” | Convolutional Neural Networks (CNNs) |
| Understanding language | “I can read and respond” | Natural Language Processing (NLP) models |
| Playing chess | “I can plan moves” | Search algorithms + reinforcement learning |
| Driving a car | “I can navigate safely” | Sensor fusion + deep reinforcement learning |
| Diagnosing disease | “I can suggest a diagnosis” | Probabilistic models + deep learning |
In short, **AI is about creating machines that can perceive, reason, learn, and act in ways that mimic or surpass human cognition.**
---
## How Does AI Work? (A High‑Level View)
1. **Data Collection & Representation**
- **Input**: Raw data (images, text, sensor readings, etc.).
- **Pre‑processing**: Cleaning, normalizing, augmenting.
- **Feature extraction**: Turning raw data into useful representations (e.g., pixel grids, word embeddings).
2. **Model Selection**
- Choose an algorithm that best fits the problem:
- **Rule‑based systems** (if‑then logic) for simple, deterministic tasks.
- **Statistical models** (e.g., logistic regression, Naïve Bayes) for probabilistic inference.
- **Machine learning** (supervised, unsupervised, reinforcement) for pattern discovery.
- **Deep learning** (neural networks) for high‑dimensional, complex data.
3. **Training / Learning**
- **Supervised learning**: Feed labeled examples; the model learns to map inputs → outputs.
- **Unsupervised learning**: Discover structure without labels (clustering, dimensionality reduction).
- **Reinforcement learning**: Learn by trial‑and‑error, receiving rewards for good actions.
- **Transfer learning**: Fine‑tune a pre‑trained model on a new task.
4. **Inference / Decision Making**
- Once trained, the model processes new inputs and produces outputs (classification, regression, generation, etc.).
-
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s theory of relativity** is really two ideas that changed how we think about space, time, and gravity. Think of them as two “rules” that tell us how the universe behaves when things move fast or are near massive objects.
---
## 1. Special Relativity (1905)
### The two postulates
1. **The laws of physics are the same for everyone who is moving at a constant speed in a straight line (in “inertial” frames).**
– No experiment can tell you whether you’re at rest or moving at a steady speed, as long as you’re not turning or accelerating.
2. **The speed of light in a vacuum is always the same (≈ 299 792 km/s), no matter how fast you’re moving or how fast the light source is moving.**
– Light is a universal speed limit.
### What that means
- **Time slows down for fast‑moving clocks** (time dilation).
If you travel in a spaceship at 99 % the speed of light, your clock ticks slower than a clock on Earth.
- **Lengths contract in the direction of motion** (length contraction).
The spaceship would appear shorter to an observer on Earth.
- **Mass increases with speed** (relativistic mass).
A particle needs more energy to accelerate as it approaches light speed.
- **Energy and mass are interchangeable**: \(E = mc^2\).
A tiny amount of mass can be converted into a huge amount of energy (nuclear power, bombs).
---
## 2. General Relativity (1915)
### The key idea
**Gravity is not a force pulling objects together; it is the curvature of spacetime caused by mass and energy.**
Think of spacetime as a stretchy sheet. A heavy ball (the Sun) sits on the sheet and creates a dip. A smaller ball (Earth) rolls around that dip, following a curved path that we call an orbit.
### Consequences
- **Light bends around massive objects** (gravitational lensing).
This was confirmed during a 1911 solar eclipse.
- **Time runs slower near massive bodies** (gravitational time dilation).
GPS satellites must correct their clocks for this effect to stay accurate.
- **The universe is dynamic**: space can expand or contract, leading to cosmology (Big Bang
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain democratic functioning.<|end|><|start|>assistant<|channel|>final<|message|>### 1. The Core Idea of Democracy
At its heart, democracy is a system of government in which **power ultimately resides with the people**.
Modern democracies translate that principle into a set of institutions, processes, and norms that:
| Element | What it Looks Like in Practice | Why It Matters |
|---------|---------------------------------|----------------|
| **Popular sovereignty** | Citizens vote for representatives or decide on laws directly (referenda). | Ensures that the government reflects the will of the populace. |
| **Rule of law** | Laws are applied equally; no one is above the law. | Prevents arbitrary rule and protects individual rights. |
| **Separation of powers** | Executive, legislative, and judicial branches are distinct and can check one another. | Avoids concentration of power and promotes accountability. |
| **Free and fair elections** | Regular, transparent, competitive elections with universal adult suffrage. | Guarantees that leaders are chosen by the people. |
| **Civil liberties** | Freedom of speech, assembly, press, religion, etc. | Enables citizens to express dissent and hold leaders accountable. |
| **Pluralism & minority rights** | Protection of diverse groups and their interests. | Prevents majoritarian tyranny. |
| **Transparency & accountability** | Open government, public records, independent media, watchdog institutions. | Keeps officials answerable to the public. |
---
### 2. How These Elements Operate in Modern Governments
| Institutional Layer | Typical Mechanisms | Example |
|---------------------|-------------------|---------|
| **Legislature** | Bills are drafted, debated, amended, and voted on. Committees scrutinize details. | U.S. Congress, German Bundestag, Indian Lok Sabha |
| **Executive** | Head of state (president or monarch) and head of government (prime minister) implement laws, manage day‑to‑day governance, and represent the country internationally. | U.S. President, German Chancellor, Japanese Prime Minister |
| **Judiciary** | Courts interpret laws, adjudicate disputes, and can strike down unconstitutional acts. | U.S. Supreme Court, UK Supreme Court, Constitutional Court of South Africa |
| **Electoral Commission / Election Authority** | Organizes elections, ensures fairness, counts votes, resolves disputes. | Election Commission of India, Federal Election Commission (US) |
| **Civil Society & Media** | NGOs, think‑tanks
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755053269
|
hobson123
| 2025-08-13T02:53:44Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:53:29Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ZeroWw/Qwen3-4B-Instruct-2507-GGUF
|
ZeroWw
| 2025-08-13T02:52:06Z
| 0
| 0
| null |
[
"gguf",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-13T02:44:49Z
|
---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations.
Output and embedding tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization and perform as well as pure f16.
Updated on: Wed Aug 13, 02:44:50
|
AmanPriyanshu/gpt-oss-17.9b-specialized-all-pruned-moe-only-27-experts
|
AmanPriyanshu
| 2025-08-13T02:51:50Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:51:00Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (27 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 27 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~17.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 27 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 27 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 84.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 27 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 27 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.9b-specialized-all-pruned-moe-only-27-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.9b-specialized-all-pruned-moe-only-27-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.9b-specialized-all-pruned-moe-only-27-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.9b-specialized-all-pruned-moe-only-27-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, only minimal changes to the CPU example above are needed. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What is Artificial Intelligence?
Artificial Intelligence (AI) is a branch of computer science that focuses on creating systems capable of performing tasks that normally require human intelligence. These tasks include:
| Task | Typical Human Skill | AI Approach |
|------|---------------------|-------------|
| Recognizing faces | Visual perception | Computer vision, deep learning |
| Understanding language | Natural language comprehension | Natural Language Processing (NLP) |
| Playing chess | Strategic planning | Reinforcement learning, game theory |
| Driving a car | Perception + decision‑making | Sensor fusion + control algorithms |
| Diagnosing disease | Pattern recognition + medical knowledge | Machine learning + knowledge bases |
In short, **AI is about building machines that can perceive, reason, learn, and act**—often in ways that mimic or even surpass human abilities.
---
## How Does AI Work? (A High‑Level View)
1. **Data Collection & Representation**
- **Input**: Raw data (images, text, sensor readings, etc.).
- **Pre‑processing**: Cleaning, normalizing, and converting data into a format the algorithm can use (e.g., pixel arrays, word embeddings, feature vectors).
2. **Model Selection**
- **Rule‑Based Systems**: Hand‑crafted if‑then rules (early AI, expert systems).
- **Statistical Models**: Linear regression, decision trees, Bayesian networks.
- **Machine Learning (ML)**: Algorithms that learn patterns from data.
- **Deep Learning (DL)**: Neural networks with many layers (CNNs, RNNs, Transformers).
3. **Training / Learning**
- **Supervised Learning**: Model learns from labeled examples (e.g., “cat” vs. “dog”).
- **Unsupervised Learning**: Model discovers structure without labels (e.g., clustering).
- **Re‑inforcement Learning**: Model learns by trial‑and‑error, receiving rewards/punishments (e.g., AlphaGo, autonomous driving).
4. **Inference / Decision Making**
- Once trained, the model processes new inputs and outputs predictions or actions.
- In some systems, the output is a probability distribution; in others, a discrete decision.
5. **Feedback & Adaptation**
- **Online Learning**: Continually updates the model as new data arrives.
- **Human‑in‑the‑Loop**
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s theory of relativity** is really two ideas that changed how we think about space, time, and gravity. Think of them as two “rules” that work together to describe the universe.
---
## 1. Special Relativity (1905)
### The two post‑facts
1. **The speed of light is the same for everyone** – no matter how fast you’re moving or how fast the light source is moving, you’ll always measure light’s speed as about 299 million meters per second (≈ 300 000 km/s).
2. **The laws of physics are the same in every inertial (non‑accelerating) reference frame** – whether you’re on a train, in a car, or floating in space, the same equations describe how things behave.
### What that means
- **Time slows down** for a fast‑moving clock.
If you travel close to the speed of light, your watch ticks slower compared to a friend who stays still. (This is called *time dilation*.)
- **Lengths shrink** in the direction of motion.
A fast‑moving ruler looks shorter to a stationary observer. (This is *length contraction*.)
- **Mass increases** with speed.
The faster you go, the harder it is to accelerate you any more. (This is *relativistic mass* or, more modernly, the energy–mass equivalence \(E=mc^2\).)
- **Space and time are linked**.
Events that happen at the same place but at different times can look different to observers moving relative to each other. The “distance” between two events is not just space or just time, but a mix of both.
**Why it matters**
- GPS satellites must correct for these effects to keep our maps accurate.
- Particle accelerators smash particles at near‑light speeds, turning kinetic energy into new particles.
- The famous “twin paradox” (one twin travels at high speed and returns younger) is a real, measurable effect.
---
## 2. General Relativity (1915)
### The big idea
Gravity isn’t a force pulling objects together; it’s the **curvature of spacetime** caused by mass and energy.
- Imagine a heavy ball (the Sun) placed on a rubber sheet (spacetime).
The sheet sags around the ball.
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain democratic functions, institutions, processes.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “democracy” means today
In contemporary politics “democracy” is usually understood as a **pluralist, representative system** in which:
| Core feature | What it looks like in practice | Why it matters |
|--------------|--------------------------------|----------------|
| **Popular sovereignty** | The people are the ultimate source of political authority. | It legitimises the state and protects against arbitrary rule. |
| **Rule of law** | Laws are applied equally, and no one is above the law. | It guarantees predictability, protects rights, and limits executive power. |
| **Separation of powers** | Executive, legislative, and judicial branches are distinct and can check one another. | It prevents concentration of power and protects liberty. |
| **Free and fair elections** | Citizens vote at regular intervals, with secret ballots, independent electoral commissions, and no coercion. | It ensures that leaders reflect the will of the people. |
| **Civil liberties & political rights** | Freedom of speech, assembly, association, and the press. | They allow citizens to hold leaders accountable. |
| **Pluralism & minority protection** | Institutions (e.g., courts, ombudsmen, anti‑discrimination laws) safeguard minority interests. | It keeps democracy inclusive and stable. |
These elements are not static; they evolve with technology, culture, and crises.
---
### 2. How modern democracies are organised
| Branch | Typical institutions | Key functions |
|--------|----------------------|---------------|
| **Executive** | President/Prime Minister, Cabinet, ministries | Implements laws, manages day‑to‑day governance, represents the state internationally. |
| **Legislature** | Parliament (bicameral or unicameral), committees | Drafts, debates, and passes laws; scrutinises the executive; allocates budgets. |
| **Judiciary** | Constitutional court, supreme court, lower courts | Interprets laws, protects constitutional rights, resolves disputes. |
| **Electoral bodies** | Independent electoral commissions | Organises elections, ensures fairness, adjudicates disputes. |
| **Civil society & media** | NGOs, think‑tanks, independent press | Provides information, mobilises citizens, holds power accountable. |
| **Other oversight bodies** | Ombudsman, audit offices, anti‑corruption agencies | Detects and prevents abuse of power. |
---
### 3.
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
judsfdf/copiaestiloperoesfactualperomuyconsiso_trnsferencia_jsu
|
judsfdf
| 2025-08-13T02:50:20Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T02:49:53Z
|
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** judsfdf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755053324
|
IvanJAjebu
| 2025-08-13T02:50:06Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:49:40Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tuantranmlv/contractbert_dichvu_phamvidichvu
|
tuantranmlv
| 2025-08-13T02:48:58Z
| 58
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-05T22:27:11Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmanPriyanshu/gpt-oss-17.3b-specialized-all-pruned-moe-only-26-experts
|
AmanPriyanshu
| 2025-08-13T02:48:54Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:48:06Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (26 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 26 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~17.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 26 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
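To make point 3 concrete, here is a minimal, framework-agnostic sketch of top-k routing over a reduced expert pool. It shows the generic mechanism only, not the actual GPT-OSS routing code.

```python
import torch

def route_tokens(router_logits: torch.Tensor, top_k: int = 4):
    """Select the top_k experts per token and renormalize their weights.

    router_logits: [num_tokens, num_experts] raw router scores.
    Returns (weights, expert_ids), each of shape [num_tokens, top_k].
    """
    probs = torch.softmax(router_logits, dim=-1)
    weights, expert_ids = probs.topk(top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the chosen experts
    return weights, expert_ids

# After pruning, the router simply scores fewer experts (26 instead of 32);
# the top-4 selection and mixing logic is unchanged.
logits = torch.randn(8, 26)        # 8 tokens, 26 retained experts
weights, expert_ids = route_tokens(logits)
print(weights.shape, expert_ids.shape)  # torch.Size([8, 4]) torch.Size([8, 4])
```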
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 26 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 81.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch after this list)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
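For the speculative-decoding use case mentioned above, one possible setup is Hugging Face assisted generation, with this pruned model as the drafter and the full GPT-OSS-20B as the verifier (both share the same tokenizer). This is an untested sketch, not a verified recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Assumptions: both models fit in available memory and assisted generation
# works with this architecture out of the box - treat this as a sketch only.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-17.3b-specialized-all-pruned-moe-only-26-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("Explain expert pruning in one paragraph.", return_tensors="pt").to(target.device)
# The draft model proposes tokens; the full model accepts or rejects them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```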
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying the experts that activated most consistently across all tasks, we created this specialized model, which maintains domain expertise while cutting the expert count per layer from 32 to 26 and reducing computational requirements accordingly.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 26 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.3b-specialized-all-pruned-moe-only-26-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-all-pruned-moe-only-26-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-17.3b-specialized-all-pruned-moe-only-26-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-all-pruned-moe-only-26-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, only minimal changes to the CPU example above are needed. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What Is Artificial Intelligence?
Artificial Intelligence (AI) is a branch of computer science that builds systems capable of performing tasks that normally require human intelligence. These tasks include:
| Human Skill | AI Equivalent |
|-------------|---------------|
| Perception (seeing, hearing) | Computer vision, speech recognition |
| Reasoning & Decision‑Making | Rule‑based systems, probabilistic inference |
| Learning & Adaptation | Machine learning, deep learning |
| Language Understanding | Natural language processing (NLP) |
| Physical Interaction | Robotics, autonomous vehicles |
In short, **AI is about giving machines the ability to observe, understand, learn, and act in ways that mimic or surpass human cognition.**
---
## How AI Works – The Core Ingredients
| Component | What It Does | Typical Algorithms / Models |
|-----------|--------------|------------------------------|
| **Data** | The raw material. AI learns patterns from examples. | Image pixels, text corpora, sensor streams |
| **Representation** | How information is encoded. | Feature vectors, embeddings, graphs |
| **Learning** | Adjusting internal parameters to fit data. | Gradient descent, back‑propagation, reinforcement learning |
| **Inference** | Using the learned model to make predictions or decisions. | Forward pass in a neural net, Bayesian inference |
| **Feedback Loop** | Improving the model over time. | Online learning, active learning, human‑in‑the‑loop |
### 1. **Data Collection & Pre‑processing**
- **Gather**: Raw data from sensors, text, images, etc.
- **Clean**: Remove noise, handle missing values.
- **Transform**: Normalize, encode categorical variables, augment data.
### 2. **Feature Engineering (or Representation Learning)**
- **Manual Features**: Domain experts craft descriptors (e.g., edge detectors in images).
- **Automatic Features**: Deep learning learns hierarchical representations directly from raw data.
### 3. **Model Selection**
| Task | Typical Models |
|------|----------------|
| Classification | Logistic regression, SVM, Random Forest, CNN |
| Regression | Linear regression, Gradient Boosting, RNN |
| Sequence Modeling | RNN, LSTM, Transformer |
| Decision Making | Markov Decision Process, Q‑learning, Policy Gradient |
| Generative Tasks | GANs, VAEs, Diffusion Models |
### 4. **Training (Learning)**
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain English**
---
### 1. Two main ideas
| **Special Relativity** | **General Relativity** |
|------------------------|------------------------|
| Deals with objects moving at constant speeds (especially close to the speed of light). | Deals with gravity and how it bends space and time. |
---
### 2. Special Relativity (1905)
| Concept | What it means |
|---------|---------------|
| **Speed of light is constant** | No matter of how fast you’re moving, light always travels at the same speed (≈ 300 000 km/s). |
| **Time and space are relative** | How fast time ticks and how long distances are depends on how fast you’re moving relative to something else. |
| **Mass grows with speed** | As you approach light speed, you become heavier and need more energy to keep accelerating. |
| **E = mc²** | Energy (E) and mass (m) are interchangeable; a tiny amount of mass can become a huge amount of energy. |
**Why it matters:**
- Explains why a fast‑moving clock runs slower (time dilation).
- Explains why a fast‑moving object’s length appears shortened (length contraction).
- Gives the foundation for modern physics, GPS, particle accelerators, etc.
---
### 3. General Relativity (1915)
| Concept | What it means |
|---------|---------------|
| **Gravity is geometry** | Massive objects (planets, stars, black holes) curve the “fabric” of space‑time. |
| **Objects follow the curves** | A planet orbits the Sun because it’s following the curved path created by the Sun’s mass. |
| **Time runs slower near mass** | Clocks closer to a massive body tick more slowly than clocks farther away. |
| **Light bends around mass** | Light follows the curved space‑time, so it bends when passing near a massive object (gravitational lensing). |
**Why it matters:**
- Predicts the bending of starlight by the Sun (confirmed during the 1917 eclipse).
- Explains why GPS satellites need relativistic corrections.
- Describes black holes, the expansion of the universe, and gravitational waves.
---
### 4. Everyday Takeaway
- **Speed matters:** The faster you go, the more time slows and
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain democratic functioning.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “democracy” Means in Practice
Modern democracies are not a single, monolithic system; they are a collection of institutions, norms, and processes that together create a government that is **accountable to its citizens**. The core features that most contemporary democracies share are:
| Feature | Practical Manifestation |
|---------|------------------------|
| **Popular sovereignty** | Citizens elect representatives or vote directly on key issues. |
| **Rule of law** | Laws apply equally; no one is above the law. |
| **Separation of powers** | Legislative, executive, and judicial branches are distinct and can check one another. |
| **Free and fair elections** | Regular, transparent, and competitive elections with universal adult suffrage. |
| **Political pluralism** | Multiple parties, interest groups, and civil society organizations can compete and influence policy. |
| **Civil liberties** | Freedom of speech, assembly, press, and religion are protected. |
| **Accountability mechanisms** | Media, courts, audits, and public oversight bodies hold officials responsible. |
### 2. How the System Works Day‑to‑Day
| Stage | Key Actors | Typical Actions | Checks & Balances |
|-------|------------|-----------------|-------------------|
| **Policy agenda setting** | Citizens, NGOs, lobbyists, media | Raise issues, mobilize public opinion | Media scrutiny, public debate |
| **Legislative process** | Parliament/Consentus | Draft, debate, amend, vote on bills | Committee reviews, opposition scrutiny, public hearings |
| **Executive implementation** | President/Prime Minister, ministries | Enact laws, administer programs, negotiate treaties | Parliamentary oversight, budget approvals |
| **Judicial review** | Courts | Interpret laws, adjudicate disputes, strike down unconstitutional acts | Judicial independence, appellate review |
| **Election cycle** | Electoral commissions, political parties, voters | Campaigns, voting, vote counting | Observation missions, recounts, legal challenges |
| **Public accountability** | Media, watchdog NGOs, audit institutions | Investigate corruption, expose misconduct | Freedom of information laws, whistleblower protections |
### 3. Modern Democratic Innovations
| Innovation | How It Enhances Democracy |
|------------|---------------------------|
| **Direct democracy tools** (referenda, citizen initiatives) | Gives citizens a direct voice on specific policies. |
| **Digital platforms
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
vorkna/gemma-3
|
vorkna
| 2025-08-13T02:46:44Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T02:46:20Z
|
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vorkna
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts
|
AmanPriyanshu
| 2025-08-13T02:45:57Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:45:10Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (25 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 25 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~16.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 25 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 25 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 78.1% of original expert parameters (see the quick arithmetic check after this list)
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
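The percentage above follows directly from the retained expert count (25 of 32 ≈ 78.1%). The short check below also backs out a rough per-expert parameter size from the totals quoted in this card, under the assumption that model size scales linearly with the number of retained experts.

```python
# Back-of-the-envelope check of the sizes quoted above (assumption: parameter
# count scales linearly with the number of retained experts).
full_b, pruned_b = 21.0, 16.7     # ~21B original, ~16.7B with 25/32 experts
total_experts, kept = 32, 25

per_expert_b = (full_b - pruned_b) / (total_experts - kept)   # ~0.61B per expert
shared_b = full_b - total_experts * per_expert_b              # ~1.3B non-expert params

print(f"retained expert fraction: {kept / total_experts:.1%}")               # 78.1%
print(f"~{per_expert_b:.2f}B removed per pruned expert, ~{shared_b:.1f}B shared")
```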
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying the experts that activated most consistently across all tasks, we created this specialized model, which maintains domain expertise while cutting the expert count per layer from 32 to 25 and reducing computational requirements accordingly.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 25 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, only minimal changes to the CPU example above are needed. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science that seeks to build systems capable of performing tasks that normally require human intelligence. These tasks include:
| Task | Typical Human Skill | AI Approach |
|------|---------------------|-------------|
| Recognizing faces | Visual perception | Computer vision, deep learning |
| Understanding language | Natural language comprehension | Natural‑language processing (NLP) |
| Playing chess | Strategic planning | Search algorithms, reinforcement learning |
| Driving a car | Perception + decision making | Sensor fusion + control systems |
| Diagnosing disease | Pattern recognition + medical knowledge | Machine learning + knowledge bases |
In short, **AI is about giving machines the ability to learn from data, reason about the world, and act autonomously or semi‑autonomously.**
---
## How AI Works – The Core Ingredients
| Component | What it does | Typical Techniques |
|-----------|--------------|--------------------|
| **Data** | The raw material from which models learn | Images, text, sensor streams, logs |
| **Representation** | How we encode knowledge | Feature vectors, embeddings, symbolic rules |
| **Learning** | Adjusting a model to fit data | Supervised, unsupervised, reinforcement learning |
| **Inference** | Using the model to make predictions or decisions | Forward/backward propagation, Monte‑Carlo search |
| **Actuation** | Executing actions in the real world | Robotics, API calls, recommendation engines |
| **Feedback** | Updating the system based on outcomes | Online learning, human‑in‑the‑loop |
Below is a step‑by‑step view of a typical AI pipeline:
1. **Collect Data**
*Example:* A self‑driving car collects LiDAR, camera, GPS, and IMU data.
2. **Pre‑process & Feature‑Engineer**
*Example:* Convert raw sensor streams into a set of numerical features (e.g., distance to nearest obstacle).
3. **Choose a Model**
*Example:* A convolutional neural network (CNN) for image classification, a recurrent neural network (RNN) for time‑series, or a transformer for language.
4. **Train the Model**
*Example:* Use stochastic gradient descent to minimize a loss function (e.g., cross‑entropy for classification).
5. **Validate & Test**
*Example:* Evaluate on
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain, everyday language**
---
### 1. Two main ideas
| **Special Relativity** | **General Relativity** |
|------------------------|------------------------|
| Deals with objects moving at constant speeds (especially near the speed of light). | Deals with gravity and how it bends space and time. |
---
### 2. Special Relativity (1905)
1. **Speed of light is the same for everyone**
- No matter how fast you’re moving, light always travels at the same speed (≈ 300 000 km/s).
- This is like a “speed limit” that can’t be broken.
2. **Time and space are not absolute**
- If you’re moving fast, time for you ticks slower compared to someone standing still.
- Lengths shrink in the direction of motion.
- Think of a moving train: a clock on the train runs slower, and a ruler on the train looks a bit shorter to an outside observer.
3. **Mass and energy are interchangeable**
- The famous equation \(E = mc^2\) says that a tiny amount of mass can be converted into a huge amount of energy (because \(c^2\) is enormous).
**Why it matters:**
- Explains why GPS satellites need tiny corrections.
- Explains why particles in accelerators can reach enormous energies.
- Gives the foundation for modern physics (quantum field theory, etc.).
---
### 3. General Relativity (1915)
1. **Gravity is not a “force” but a curvature of space‑time**
- Imagine a heavy ball on a rubber sheet. The sheet sags around the ball.
- Smaller balls rolling nearby follow curved paths because the sheet is bent.
- In the universe, massive objects (planets, stars, black holes) bend the “fabric” of space‑time, and other objects move along those curves.
2. **Time runs slower where gravity is stronger**
- Near a massive planet, clocks tick a little slower than clocks far away.
- This is why GPS satellites (far from Earth’s gravity) run a tiny bit faster than clocks on the ground.
3. **Light follows the curves**
- Light rays bend around massive objects.
- This is why stars appear slightly shifted when their
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain democratic functioning.<|end|><|start|>assistant<|channel|>final<|message|>**Democracy in Modern Governments: How It Functions Today**
Modern democracies are built on a set of shared principles—rule of law, political pluralism, and the protection of individual rights—but the exact mechanisms that bring those principles to life vary widely from country to country. Below is a concise, step‑by‑step overview of the key components that make a contemporary democratic system work, followed by a quick comparison of the most common models.
---
## 1. The Core Pillars
| Pillar | What It Means | Typical Institutional Forms |
|--------|---------------|------------------------------|
| **Popular Sovereignty** | The people are the ultimate source of political authority. | Universal, free, and secret elections; referenda; citizen‑initiated legislation. |
| **Rule of Law** | All actions by the state are bound by law, and laws are applied equally. | Independent judiciary; constitutional courts; legal codes that protect rights. |
| **Political Pluralism** | Multiple parties and viewpoints can compete for power. | Multi‑party systems; proportional or mixed electoral systems; campaign finance regulations. |
| **Civil Liberties & Human Rights** | Freedom of expression, assembly, religion, and privacy. | Constitutional guarantees; ombudsman offices; human‑rights commissions. |
| **Accountability & Transparency** | Public officials must explain and justify their actions. | Freedom‑of‑information laws; open‑budget systems; anti‑corruption agencies. |
---
## 2. The Everyday Mechanics
| Mechanism | How It Works | Typical Examples |
|-----------|--------------|------------------|
| **Elections** | Citizens vote for representatives or directly for policy. | Presidential, parliamentary, or local elections; proportional representation; ranked‑choice voting. |
| **Legislative Process** | Laws are drafted, debated, amended, and passed by elected bodies. | Bicameral parliaments; committee hearings; public consultations. |
| **Executive Function** | The elected head of state (president, prime minister) implements laws. | Cabinet appointments; executive orders; policy agendas. |
| **Judicial Review** | Courts interpret laws and can strike down unconstitutional acts. | Constitutional courts; appellate courts; judicial oversight of administrative actions. |
| **Civil Society & Media** | NGOs, unions, and the press hold the government accountable. | Freedom of the press; watchdog NGOs; public protests. |
| **Checks &
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
John6666/nova-reality-xl-illustrious-v60-sdxl
|
John6666
| 2025-08-13T02:45:04Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"fantasy",
"hentai",
"landscape",
"coloring",
"details",
"realism",
"prompt adherence",
"merge",
"noobai",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:39:47Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- fantasy
- hentai
- landscape
- coloring
- details
- realism
- prompt adherence
- merge
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- Laxhar/noobai-XL-1.1
- OnomaAIResearch/Illustrious-XL-v2.0
---
Original model is [here](https://civitai.com/models/453428/nova-reality-xl?modelVersionId=2108298).
This model was created by [Crody](https://civitai.com/user/Crody).
|
0xGareeb/blockassist-bc-diving_jumping_llama_1755052980
|
0xGareeb
| 2025-08-13T02:44:59Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:44:15Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755052995
|
IvanJAjebu
| 2025-08-13T02:44:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:44:14Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
obiwan001/roadwork273
|
obiwan001
| 2025-08-13T02:43:20Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-05T15:59:44Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
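Since the repository tags indicate a ViT image-classification checkpoint served through 🤗 Transformers, a minimal (untested) sketch for trying the model might look like the following; the label set and preprocessing are checkpoint-specific assumptions.
```python
# Minimal sketch, assuming this is a standard ViT image-classification checkpoint
# (as the repository tags suggest); labels and preprocessing come from the checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="obiwan001/roadwork273")
predictions = classifier("path/to/image.jpg")  # local file path or image URL
print(predictions)  # e.g. [{"label": "...", "score": 0.97}, ...]
```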
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BinBashir/Naija-bert_on_jumia_dataset
|
BinBashir
| 2025-08-13T02:42:12Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T02:41:13Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
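Since the repository tags indicate a BERT text-classification checkpoint served through 🤗 Transformers, a minimal (untested) sketch for trying the model might look like the following; the label set is a checkpoint-specific assumption.
```python
# Minimal sketch, assuming this is a standard BERT text-classification checkpoint
# (as the repository tags suggest); the label set is defined by the checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="BinBashir/Naija-bert_on_jumia_dataset")
print(classifier("The delivery was fast and the product works well."))
```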
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fader2077/q4_k_m
|
fader2077
| 2025-08-13T02:41:32Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T02:40:01Z
|
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fader2077
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
p1necone/powerv1-16bit
|
p1necone
| 2025-08-13T02:39:46Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T02:39:40Z
|
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** p1necone
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/nova-anime3d-xl-v40-sdxl
|
John6666
| 2025-08-13T02:39:44Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"3D",
"3DCG",
"CG",
"2.5D",
"digital art",
"gacha",
"doll",
"fantasy",
"landscape",
"colorful",
"details",
"posing",
"PVC / Gacha vibe",
"merge",
"noobai",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:32:20Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- 3D
- 3DCG
- CG
- 2.5D
- digital art
- gacha
- doll
- fantasy
- landscape
- colorful
- details
- posing
- PVC / Gacha vibe
- merge
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-1.1
---
The original model is available [here](https://civitai.com/models/1518336/nova-anime3d-xl?modelVersionId=2108249).
This model was created by [Crody](https://civitai.com/user/Crody).
|
AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts
|
AmanPriyanshu
| 2025-08-13T02:37:34Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:36:52Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (22 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 22 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~14.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 22 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 22 experts per layer
- **No Retraining**: Direct removal without additional training steps
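To make the idea concrete, here is a minimal, self-contained sketch of the core operation on a toy MoE layer. It is purely illustrative: the class and parameter names are invented for the example and do not match the actual `gpt_oss` module layout.
```python
# Illustrative sketch of expert pruning: keep the most-used experts and shrink the router.
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """A deliberately simplified MoE block: one linear router plus linear experts."""
    def __init__(self, d_model=64, n_experts=32, top_k=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.top_k = top_k

def prune_experts(layer: ToyMoE, keep_ids: list) -> ToyMoE:
    """Return a copy of `layer` containing only the experts listed in `keep_ids`."""
    pruned = ToyMoE(layer.router.in_features, len(keep_ids), layer.top_k)
    # Reuse the trained weights of the experts we keep.
    pruned.experts = nn.ModuleList(layer.experts[i] for i in keep_ids)
    # Shrink the router by keeping only the rows that score the retained experts.
    with torch.no_grad():
        pruned.router.weight.copy_(layer.router.weight[keep_ids])
        pruned.router.bias.copy_(layer.router.bias[keep_ids])
    return pruned

# Keep 22 of 32 experts; in practice `keep_ids` comes from the activation analysis.
layer = ToyMoE()
keep_ids = list(range(22))  # placeholder expert indices
pruned_layer = prune_experts(layer, keep_ids)
print(len(pruned_layer.experts), pruned_layer.router.out_features)  # -> 22 22
```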
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 68.8% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 22 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 22 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
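As an illustration of the expert-ranking step above, the following sketch selects the most frequently activated experts per layer from a hypothetical layers × experts activation-count matrix; the real analysis uses the router statistics from the dataset linked above, whose exact schema may differ.
```python
import numpy as np

# Stand-in for the real per-layer router statistics: rows = layers, cols = experts,
# values = how often each expert was among the top-k routed experts.
n_layers, n_experts, keep = 24, 32, 22
rng = np.random.default_rng(0)
activation_counts = rng.integers(0, 10_000, size=(n_layers, n_experts))

# For every layer, keep the `keep` most frequently activated experts.
keep_ids_per_layer = [
    sorted(np.argsort(counts)[::-1][:keep].tolist())
    for counts in activation_counts
]
print(keep_ids_per_layer[0])  # expert indices retained for layer 0
```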
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
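Putting those two changes into the full CPU example gives a complete CUDA variant (a minimal sketch, identical to the CPU code apart from the device mapping):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Same flow as the CPU example above, but letting Accelerate place weights on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.9b-specialized-all-pruned-moe-only-22-experts")

messages = [{"role": "user", "content": "What is artificial intelligence and how does it work?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```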
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how it works.<|end|><|start|>assistant<|channel|>final<|message|>## What Is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science that builds systems capable of performing tasks that normally require human intelligence. These tasks include:
| Human‑like ability | Typical AI task |
|--------------------|----------------|
| Perception (seeing, hearing) | Computer vision, speech recognition |
| Reasoning (solving problems, making decisions) | Planning, game‑playing, medical diagnosis |
| Learning (improving from experience) | Machine learning, deep learning |
| Language (understanding, generating text) | Natural‑language processing |
| Interaction (communicating, collaborating) | Chatbots, virtual assistants |
AI is **not** a single technology but a collection of methods, algorithms, and architectures that together enable machines to act “intelligently.”
---
## How AI Works – The Big Picture
1. **Data Collection & Representation**
- AI systems need data: images, text, sensor readings, etc.
- Data is transformed into a format the algorithm can use (vectors, tensors, graphs).
2. **Model Selection**
- Choose a computational model that can learn from the data.
- Common families:
* **Statistical models** (regression, Bayesian networks)
* **Symbolic models** (logic, rule‑based systems)
* **Connectionist models** (neural networks)
* **Hybrid models** (combining the above)
3. **Training / Learning**
- The model adjusts internal parameters to minimize a *loss function* (error between prediction and truth).
- **Supervised learning**: labeled data (e.g., “cat” vs. “dog”).
- **Unsupervised learning**: discover patterns without labels (e.g., clustering).
- **Re‑inforcement learning**: learn by trial‑and‑error with rewards.
4. **Inference / Decision Making**
- Once trained, the model receives new input and produces an output (classification, prediction, action).
- In some systems, the output is a *policy* that tells the system what to do next.
5. **Evaluation & Feedback**
- Measure performance using metrics (accuracy, precision, recall, F1, etc.).
- If performance is unsatisfactory, iterate: gather more data, tweak the model, or try a
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain, everyday language**
---
### 1. Two main ideas
| **Special Relativity** | **General Relativity** |
|------------------------|------------------------|
| Deals with objects moving at high speeds (close to the speed of light). | Deals with gravity and how it bends space and time. |
---
### 2. Special Relativity – “Time and Space are Flexible”
1. **Speed of Light is a Cosmic Speed‑Limit**
- Light always travels at the same speed (≈ 300 000 km/s) no matter who’s looking at it.
- Nothing can go faster than light.
2. **Time Slows Down When You Move Fast**
- Imagine you’re on a very fast train. To someone standing on the platform, your watch ticks slower.
- The farther you get to the speed of light, the more your time “stretches.”
- This is called **time dilation**.
3. **Length Shrinks in the Direction of Motion**
- A moving object looks shorter in the direction it’s moving.
- This is **length contraction**.
4. **No Absolute “Now”**
- Two observers moving relative to each other can disagree about when an event happens.
- There isn’t a single, universal “now” that everyone shares.
**Bottom line:** If you’re moving very fast, your clock runs slower and distances in front of you shrink. But you’ll still see the same physics – the same laws of motion, the same speed of light.
---
### 3. General Relativity – “Gravity is Curved Space‑Time”
1. **Space‑Time is a Fabric**
- Think of space and time as a stretchy sheet.
- Massive objects (planets, stars, black holes) dent this sheet.
2. **Gravity is the Result of the Dents**
- A ball rolls toward the dent because the sheet is curved.
- In the universe, objects “roll” toward each other because the space‑time around them is curved.
3. **Light Follows the Curved Sheet**
- Light rays bend around massive objects (gravitational lensing).
- This is why distant galaxies can appear magnified or distorted.
4. **Time Runs Differently Where Gravity is Strong**
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain democratic functions, institutions, processes.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “Democracy” Means Today
Modern democracies are not a single, one‑style system; they are a family of institutions and practices that share a few core principles:
| Core Principle | What It Looks Like in Practice |
|----------------|------------------------------------|
| **Popular Sovereignty** | The people are the ultimate source of political legitimacy. |
| **Rule of Law** | All citizens, including leaders, are bound by the same laws. |
| **Political Pluralism** | Multiple parties, interest groups, and civil‑society actors compete for influence. |
| **Free and Fair Elections** | Regular, secret, and inclusive voting that genuinely reflects the will of the electorate. |
| **Separation of Powers** | Executive, legislative, and judicial branches are distinct and can check one another. |
| **Civil Liberties** | Freedom of speech, assembly, press, and religion are protected. |
| **Accountability & Transparency** | Governments must explain their actions and be open to scrutiny. |
These principles are the scaffolding that modern democratic governments use to organize themselves.
---
### 2. Institutional Architecture
| Branch | Key Functions | Typical Checks |
|--------|---------------|----------------|
| **Executive** (President/Prime‑Minister & Cabinet) | Implements laws, runs day‑to‑day administration, foreign policy | Veto power, impeachment, parliamentary confidence |
| **Legislature** (Parliament/ Congress) | Drafts, debates, and passes laws | Oversight hearings, budget control, confirmation of appointments |
| **Judiciary** (Courts) | Interprets laws, protects rights, resolves disputes | Judicial review, appellate review, independence from politics |
| **Electoral Bodies** (Independent commissions) | Organize elections, enforce campaign finance rules | Audits, dispute resolution |
| **Civil‑Society & Media** | Represent interests, hold power accountable | Investigative journalism, NGOs, public protests |
---
### 3. The Everyday Life of a Democratic State
1. **Policy Formation**
* Legislators propose bills → committee reviews → public hearings → debate → vote.
* The executive can propose legislation, but it must be approved by the legislature.
2. **Checks & Balances**
* A law passed by the legislature can be vetoed by the executive (often overridden by a super‑majority
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|