| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
SmallDoge/Doge-40M-MoE-checkpoint
|
SmallDoge
| 2025-08-13T02:36:20Z
| 55
| 0
|
transformers
|
[
"transformers",
"safetensors",
"doge",
"text-generation",
"conversational",
"en",
"zh",
"dataset:SmallDoge/SmallCorpus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-10T09:56:47Z
|
---
library_name: transformers
license: apache-2.0
datasets:
- SmallDoge/SmallCorpus
language:
- en
- zh
pipeline_tag: text-generation
---
# **Doge 40M MoE checkpoint**
Doge uses `wsd_scheduler` as its training scheduler, which divides the learning-rate schedule into three stages: `warmup`, `stable`, and `decay`. This allows training to continue on any new dataset from any checkpoint in the `stable` stage without loss spikes (see the scheduler sketch after the table below).
Here are the initial learning rates required to continue training at each checkpoint:
- [Doge-40M](https://huggingface.co/SmallDoge/Doge-40M-checkpoint): 8e-3
- **[Doge-40M-MoE](https://huggingface.co/SmallDoge/Doge-40M-MoE-checkpoint): 8e-3**
| Model | Learning Rate | Schedule | Warmup Steps | Stable Steps |
|-------|---------------|----------|--------------|--------------|
| [Doge-40M](https://huggingface.co/SmallDoge/Doge-40M-checkpoint) | 8e-3 | wsd_scheduler | 2000 | 4000 |
| [Doge-40M-MoE](https://huggingface.co/SmallDoge/Doge-40M-MoE-checkpoint) | 8e-3 | wsd_scheduler | 2000 | 4000 |
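The table above corresponds to a standard warmup-stable-decay schedule. The following is a minimal sketch of such a schedule using PyTorch's `LambdaLR`: the 8e-3 peak learning rate and the 2000 warmup / 4000 stable steps come from the table, while the decay length, optimizer choice, and placeholder model are illustrative assumptions rather than the exact Doge training configuration.
```python
import torch

# Minimal warmup-stable-decay (WSD) schedule sketch.
# Assumptions: a 2000-step decay, AdamW, and a placeholder model; only the
# 8e-3 peak LR and the 2000/4000 warmup/stable steps follow the table above.
warmup_steps, stable_steps, decay_steps = 2000, 4000, 2000

def wsd_lambda(step: int) -> float:
    if step < warmup_steps:                    # linear warmup to the peak LR
        return step / max(1, warmup_steps)
    if step < warmup_steps + stable_steps:     # hold the peak LR (checkpoints here can be resumed)
        return 1.0
    done = step - warmup_steps - stable_steps  # linear decay towards zero
    return max(0.0, 1.0 - done / decay_steps)

model = torch.nn.Linear(8, 8)  # placeholder; in practice, load the Doge-40M-MoE checkpoint
optimizer = torch.optim.AdamW(model.parameters(), lr=8e-3)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=wsd_lambda)

for step in range(10):  # training-loop placeholder (forward/backward omitted)
    optimizer.step()
    scheduler.step()
```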
|
AmanPriyanshu/gpt-oss-14.3b-specialized-all-pruned-moe-only-21-experts
|
AmanPriyanshu
| 2025-08-13T02:34:59Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:34:18Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (21 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[Twitter/X](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[Twitter/X](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 21 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~14.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 21 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 21 experts per layer
- **No Retraining**: Direct removal without additional training steps (a minimal sketch of the selection step follows below)
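As a rough illustration of this selection step, the sketch below ranks the experts of one MoE layer by how often the router chose them and keeps the top 21. The tensor and function names (`activation_counts`, `router_weight`, `expert_weights`, `prune_experts`) are hypothetical and do not correspond to the authors' actual pipeline or to GPT-OSS internals.
```python
import torch

def prune_experts(router_weight: torch.Tensor,     # [num_experts, hidden] router projection (hypothetical layout)
                  expert_weights: torch.Tensor,    # [num_experts, ...] stacked expert parameters
                  activation_counts: torch.Tensor, # [num_experts] how often the router picked each expert
                  keep_n: int = 21):
    # Rank experts by routing frequency on the target tasks and keep the top keep_n
    keep_idx = torch.argsort(activation_counts, descending=True)[:keep_n]
    keep_idx, _ = torch.sort(keep_idx)  # preserve the original expert ordering
    # Drop the router rows and expert parameters of pruned experts;
    # routing then happens over the surviving experts only.
    return router_weight[keep_idx], expert_weights[keep_idx], keep_idx

# Toy demo with random tensors (32 experts, hidden size 4)
counts = torch.randint(0, 1000, (32,)).float()
router_w = torch.randn(32, 4)
experts_w = torch.randn(32, 4, 4)
new_router, new_experts, kept = prune_experts(router_w, experts_w, counts)
print(new_router.shape, new_experts.shape, kept.tolist())
```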
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 65.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
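For the speculative-decoding use case, a hypothetical sketch using transformers assisted generation is shown below: the pruned model drafts tokens that the full GPT-OSS-20B then verifies. It assumes both models share a tokenizer and fit in available memory, and it is not an official recipe from the authors.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Hypothetical assisted-generation sketch: the pruned model acts as the draft
# (assistant) model for the full GPT-OSS-20B target model.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-14.3b-specialized-all-pruned-moe-only-21-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

prompt = "Speculative decoding lets a small draft model propose tokens."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```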
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 21 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 21 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.3b-specialized-all-pruned-moe-only-21-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-all-pruned-moe-only-21-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.3b-specialized-all-pruned-moe-only-21-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-all-pruned-moe-only-21-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True
    )
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Artificial Intelligence (AI)** is a field of computer science that builds systems capable of performing tasks that normally require human intelligence. These tasks include:
| Task | Typical Human Skill | AI Approach |
|------|---------------------|-------------|
| Recognizing faces | Visual pattern recognition | Deep‑learning neural networks (CNNs) |
| Translating language | Understanding grammar & context | Statistical models, neural language models |
| Driving a car | Perception, decision‑making, motor control | Sensor fusion + reinforcement learning |
| Diagnosing disease | Integrating symptoms, tests, experience | Probabilistic inference, machine learning |
---
### 1. **Core Idea**
AI systems learn from data or rules, then generalize to new situations. The learning can be:
- **Rule‑based** (expert systems, logic programming)
- **Statistical** (probability models, Bayesian inference)
- **Learning‑based** (machine learning, deep learning)
---
### 2. **How It Works – A High‑Level Flow**
1. **Data Collection**
Gather examples of the problem domain (images, text, sensor readings, etc.).
2. **Representation**
Convert raw data into a format the algorithm can process (feature vectors, embeddings, etc.).
3. **Learning / Training**
- **Supervised**: model learns mapping from input → labeled output.
- **Unsupervised**: model discovers structure (clustering, auto‑encoding).
- **Re‑inforcement**: model learns actions that maximize a reward signal.
4. **Model Architecture**
- **Neural Networks**: layers of weighted connections (deep learning).
- **Decision Trees / Random Forests**: hierarchical splits.
- **Probabilistic Graphs**: Bayesian networks.
- **Symbolic Systems**: logic rules, ontologies.
5. **Optimization**
Adjust model parameters (weights, probabilities) to minimize a loss function (error, likelihood, etc.) using algorithms like gradient descent, back‑propagation, or evolutionary search.
6. **Inference / Deployment**
Apply the trained model to new data to produce predictions, classifications, or actions.
7. **Feedback & Adaptation**
Continuously update the model with new data or user feedback (online learning, fine‑tuning).
---
### 3. **Key Technologies**
| Technology | What It
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. Two main ideas
| **Special Relativity** | **General Relativity** |
|------------------------|------------------------|
| Deals with *moving objects* (like a fast train or a spaceship). | Deals with *gravity* and how it bends space. |
---
### 2. Special Relativity – “Time and space are linked”
1. **Speed of light is a limit**
Light always travels at the same speed (≈ 300 000 km/s). Nothing can go faster.
2. **Moving clocks run slower**
If you’re moving very fast, the clocks you carry tick a little slower than clocks that stay still.
*Why?* Because the distance light has to travel inside the moving clock is a bit longer.
3. **Length gets shorter**
A fast‑moving object appears a little shorter in the direction of motion.
*Why?* Light has to travel a longer path to reach the far end of the object.
4. **“Same” events look different**
Two events that happen at the same place for one observer may happen at different places for another moving observer.
*Why?* Because space and time mix together – they’re not separate.
**Take‑away:** In a world where the speed of light is fixed, moving changes how we measure time and distance. Nothing can outrun light, and the farther you go, the more “slower” your clock looks.
---
### 3. General Relativity – “Gravity is curved space”
1. **Mass bends space**
Think of space as a stretchy rubber sheet.
- A heavy ball (planet, star) pressed onto the sheet makes a dip.
- The dip is the *gravitational field*.
2. **Objects follow the dip**
A ball rolling on the sheet will slide toward the dip.
In the universe, planets, moons, and even light follow the curved paths created by mass.
3. **Light bends too**
Light rays don’t travel in straight lines near massive objects; they curve around them.
This explains why we see *gravitational lensing* (e.g., a galaxy’s light bent by a foreground galaxy).
4. **No “force” in the old sense**
Einstein replaced the idea
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain democratic functions, institutions, processes.<|end|><|start|>assistant<|channel|>final<|message|>**Democracy in the modern world is not a single, one‑style system but a set of principles and practices that together create a government in which power ultimately rests with the people.**
Below is a practical “road‑map” of how those principles are translated into everyday governance in contemporary states.
---
## 1. Core Democratic Principles
| Principle | What it means | How it shows up in practice |
|-----------|----------------|------------------------------|
| **Popular sovereignty** | The people are the ultimate source of authority. | Elections, referenda, public consultations. |
| **Rule of law** | Everyone, including leaders, is subject to the law. | Independent judiciary, constitutional supremacy. |
| **Separation of powers** | Legislative, executive, and judicial branches are distinct and check each other. | Parliament, cabinet, courts. |
| **Political pluralism** | Multiple parties and viewpoints compete for power. | Multi‑party elections, freedom of association. |
| **Civil‑rights protection** | Freedom of speech, assembly, religion, etc. | Constitutional guarantees, human‑rights courts. |
| **Transparency & accountability** | Citizens can see how decisions are made and hold leaders responsible. | Freedom of information laws, audits, whistle‑blower protections. |
---
## 2. Institutional Architecture
| Institution | Role | Typical Democratic Features |
|--------------|------|---------------------------|
| **Legislature** | Makes laws, represents citizens | Majority‑based elections, proportional representation, committee hearings, public debates. |
| **Executive** | Implements laws, runs day‑to‑day governance | Head of state elected or appointed by legislature, cabinet accountable to parliament, term limits. |
| **Judiciary** | Interprets laws, protects rights | Independent judges, appellate system, judicial review of executive actions. |
| **Electoral Commission** | Oversees elections | Non‑partisan, transparent vote‑counting, dispute resolution. |
| **Civil‑Service** | Executes policy | Merit‑based hiring, career progression, oversight by elected bodies. |
| **Media & Civil Society** | Watchdog & voice | Press freedom, NGOs, public forums, citizen‑initiated petitions. |
---
## 3. The Electoral Process
1. **Candidate nomination**
* Parties register candidates; independent candidates must meet thresholds (signatures, fees).
2. **Campaigning**
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755050880
|
kojeklollipop
| 2025-08-13T02:33:32Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:33:29Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-13.7b-specialized-all-pruned-moe-only-20-experts
|
AmanPriyanshu
| 2025-08-13T02:32:27Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:31:47Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (20 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[Twitter/X](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[Twitter/X](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 20 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~13.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 20 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 20 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 62.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 20 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 20 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.7b-specialized-all-pruned-moe-only-20-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.7b-specialized-all-pruned-moe-only-20-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-13.7b-specialized-all-pruned-moe-only-20-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.7b-specialized-all-pruned-moe-only-20-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True
    )
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What is Artificial Intelligence?
Artificial Intelligence (AI) is a field of computer science that focuses on building systems—usually software running on computers—that can perform tasks that normally require human intelligence. These tasks include:
| Task | Human Intelligence Needed | AI Equivalent |
|------|---------------------------|--------------|
| Recognizing faces | Visual perception | Computer vision |
| Translating languages | Linguistic understanding | Natural‑language processing |
| Playing chess | Strategic planning | Game‑playing algorithms |
| Diagnosing disease | Medical knowledge | Clinical decision support |
| Driving a car | Perception, decision‑making | Autonomous driving |
In short, **AI is the study and creation of machines that can learn, reason, and act in ways that mimic or surpass human cognition.**
---
## How Does AI Work? (A High‑Level View)
AI systems are built from a few core ingredients that work together:
| Ingredient | What it does | Typical Technology |
|------------|--------------|---------------------|
| **Data** | The raw “experience” the system learns from | Images, text, sensor readings, etc. |
| **Representation** | A way to encode data so a computer can manipulate it | Vectors, graphs, tensors |
| **Learning Algorithm** | Adjusts the system’s internal parameters to fit the data | Gradient descent, back‑propagation |
| **Inference Engine** | Uses the learned model to answer new questions | Forward‑propagation, rule‑based engines |
| **Evaluation** | Measures how well the system performs | Accuracy, loss functions, cross‑validation |
Below is a step‑by‑step outline of how most modern AI systems (especially deep learning models) are built and used.
---
### 1. **Collecting and Preparing Data**
- **Data Collection**: Gather a large, representative set of examples (e.g., thousands of labeled images of cats and dogs).
- **Pre‑processing**: Clean the data, normalize it, and transform it into a format the algorithm can use (e.g., converting pixel values to a 0‑1 range).
- **Feature Extraction** (optional): Identify useful attributes (e.g., edges, colors) that help the model focus on relevant information.
### 2. **Choosing a Model Architecture**
- **Rule‑Based Systems**: Hand‑crafted logic (e.g., if‑then rules). Good for narrow tasks with clear rules.
- **Stat
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
Imagine you’re on a train that’s moving at a constant speed. If you look at the world outside, it seems to be moving *against* you. But if you’re inside the train, you feel at rest. The same idea applies to how we measure space and time, but on a much bigger scale.
---
### 1. **Two parts of the theory**
| Part | What it says | Everyday picture |
|------|--------------|-------------------|
| **Special Relativity** | *“The laws of physics look the same no matter how fast you’re moving.”* | If you’re on a fast jet, the clocks inside the jet tick the same way as on the ground. |
| **General Relativity** | *“Gravity is just a way that massive objects bend the fabric of space‑time.”* | A heavy planet warps the space around it, making objects near it move differently than far away. |
---
### 2. **Key ideas**
| Idea | Simple explanation |
|------|---------------------|
| **Space and time are one thing (space‑time)** | Think of a 4‑dimensional sheet. Moving forward in time is like moving along the sheet. |
| **Speed changes how you see the world** | The faster you go, the more “slanted” the sheet looks. Things that are normally simultaneous can appear out of order. |
| **Light moves at a fixed speed (c)** | Nothing can go faster than light. That speed is the same for everyone, no matter where you are. |
| **Mass curves space‑time** | A big mass (planet, star) pulls the sheet around it, making paths bend. That’s what we feel as gravity. |
---
### 3. **Why it matters**
* **Time dilation** – A clock on a fast spaceship ticks *slower* than a clock on Earth.
* **Gravitational redshift** – Light that climbs away from a massive body gets *slower* (redder).
* **Black holes** – Extremely massive objects curve space‑time so much that even light can’t escape.
---
### 4. **A quick “thought experiment”**
1. **Twin Paradox** – Two identical twins: one stays on Earth, the other travels at near‑light speed. When the traveler returns, they’re
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain democratic functions, institutions, processes.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “Democracy” Means Today
In contemporary politics “democracy” is usually understood as a **plurality‑based, rule‑by‑law system** in which the people are the ultimate source of political authority. Modern democracies combine several core features:
| Feature | What It Looks Like in Practice | Why It Matters |
|--------|---------------------------------|----------------|
| **Free, fair elections** | Regular, competitive contests for office, with universal adult sufficiency, secret ballots, and independent electoral bodies | Gives the populace a direct, periodic voice |
| **Rule of law** | Constitution, statutes, and courts that bind everyone, including the government | Prevents arbitrary rule and protects minorities |
| **Separation of powers** | Distinct branches (executive, legislative, judicial) with checks and balances | Keeps power from concentrating in one office |
| **Civil liberties** | Freedom of speech, press, assembly, religion, and privacy | Enables dissent, debate, and accountability |
| **Independent institutions** | Electoral commissions, judiciary, ombudsman, audit offices | Provides impartial oversight |
| **Pluralism & minority rights** | Legal protection for ethnic, religious, gender, and other minorities | Ensures that majority rule does not become tyranny |
| **Transparency & accountability** | Open records, freedom of information, anti‑corruption laws | Keeps officials answerable to citizens |
These elements are not a checklist but a **framework** that modern governments adapt to their own histories, cultures, and institutions.
---
### 2. How the Framework Works in Practice
#### 2.1 The Legislative Branch
* **Parliament/ Congress** – elected representatives debate, amend, and pass laws.
* **Committee System** – specialized groups scrutinize bills, budgets, and executive actions.
* **Budget & Oversight** – the legislature controls public spending and can hold the executive accountable through hearings, audits, and, in some systems, impeachment.
#### 2.2 The Executive Branch
* **Head of State / Government** – elected (presidential) or chosen by the legislature (parliamentary).
* **Cabinet & Ministries** – implement laws, manage public services, and propose new legislation.
* **Executive Accountability** – must answer to the legislature, submit annual reports, and can be removed by a vote of no confidence or impeachment.
#### 2.
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
honggeun/test1
|
honggeun
| 2025-08-13T02:32:18Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T02:02:21Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** honggeun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
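A minimal, hypothetical inference sketch is shown below. It assumes this repository contains merged weights that load directly with transformers; if it only holds LoRA adapters, load them with PEFT on top of the base model instead.
```python
from transformers import pipeline

# Hypothetical usage sketch; assumes merged full weights are present in this repo.
generator = pipeline("text-generation", model="honggeun/test1", device_map="auto")
messages = [{"role": "user", "content": "Summarize what a LoRA fine-tune changes in a base model."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```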
|
rene-contango/cc8054a8-7d65-45d1-b554-34bfc8d8d140
|
rene-contango
| 2025-08-13T02:30:52Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T20:24:22Z
|
---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: cc8054a8-7d65-45d1-b554-34bfc8d8d140
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for cc8054a8-7d65-45d1-b554-34bfc8d8d140
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rene-contango/cc8054a8-7d65-45d1-b554-34bfc8d8d140", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
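For reference, a minimal DPO fine-tuning sketch with TRL is shown below. It assumes a preference dataset with `prompt`/`chosen`/`rejected` columns (`trl-lib/ultrafeedback_binarized` is used purely as an example), it is not the exact configuration used for this model, and argument names may differ across TRL versions.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hypothetical DPO training sketch (not this model's actual recipe).
model_id = "unsloth/Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Example preference dataset with prompt/chosen/rejected columns
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(
    output_dir="dpo-output",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```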
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755052176
|
IvanJAjebu
| 2025-08-13T02:30:40Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:30:30Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zstanjj/HTML-Pruner-Phi-3.8B
|
zstanjj
| 2025-08-13T02:30:10Z
| 969
| 14
|
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:2411.02959",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-16T03:57:38Z
|
---
language:
- en
library_name: transformers
base_model: microsoft/Phi-3.5-mini-instruct
license: apache-2.0
---
## Model Information
We release the HTML pruner model used in **HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems**.
<p align="left">
Useful links: 📝 <a href="https://arxiv.org/abs/2411.02959" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/papers/2411.02959" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/plageon/HtmlRAG" target="_blank">Github</a>
</p>
We propose HtmlRAG, which uses HTML instead of plain text as the format of external knowledge in RAG systems. To tackle the longer context that HTML brings, we propose **Lossless HTML Cleaning** and **Two-Step Block-Tree-Based HTML Pruning**.
- **Lossless HTML Cleaning**: This cleaning process removes only completely irrelevant content and compresses redundant structures, retaining all semantic information in the original HTML. The compressed HTML from lossless cleaning is suitable for RAG systems that have long-context LLMs and are not willing to lose any information before generation.
- **Two-Step Block-Tree-Based HTML Pruning**: The block-tree-based HTML pruning consists of two steps, both conducted on the block tree structure. The first pruning step uses an embedding model to score blocks, while the second uses a path-generative model. The first step processes the result of lossless HTML cleaning, and the second step processes the result of the first pruning step.
🌹 If you use this model, please ✨star our **[GitHub repository](https://github.com/plageon/HtmlRAG)** to support us. Your star means a lot!
## 📦 Installation
Install the package using pip:
```bash
pip install htmlrag
```
Or install the package from source:
```bash
pip install -e .
```
---
## 📖 User Guide
### 🧹 HTML Cleaning
```python
from htmlrag import clean_html
question = "When was the bellagio in las vegas built?"
html = """
<html>
<head>
<h1>Bellagio Hotel in Las</h1>
</head>
<body>
<p class="class0">The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
</body>
<div>
<div>
<p>Some other text</p>
<p>Some other text</p>
</div>
</div>
<p class="class1"></p>
<!-- Some comment -->
<script type="text/javascript">
document.write("Hello World!");
</script>
</html>
"""
# Alternatively, you can read HTML files and merge them
# html_files=["/path/to/html/file1.html", "/path/to/html/file2.html"]
# htmls=[open(file).read() for file in html_files]
# html = "\n".join(htmls)
simplified_html = clean_html(html)
print(simplified_html)
# <html>
# <h1>Bellagio Hotel in Las</h1>
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# <div>
# <p>Some other text</p>
# <p>Some other text</p>
# </div>
# </html>
```
### 🔧 Configure Pruning Parameters
The example HTML document above is rather short. Real-world HTML documents can be much longer and more complex. To handle such cases, we can configure the following parameters:
```python
# Maximum number of words in a node when constructing the block tree for pruning with the embedding model
MAX_NODE_WORDS_EMBED = 10
# MAX_NODE_WORDS_EMBED = 256 # a recommended setting for real-world HTML documents
# Maximum number of tokens in the output HTML document pruned with the embedding model
MAX_CONTEXT_WINDOW_EMBED = 60
# MAX_CONTEXT_WINDOW_EMBED = 6144 # a recommended setting for real-world HTML documents
# Maximum number of words in a node when constructing the block tree for pruning with the generative model
MAX_NODE_WORDS_GEN = 5
# MAX_NODE_WORDS_GEN = 128 # a recommended setting for real-world HTML documents
# Maximum number of tokens in the output HTML document pruned with the generative model
MAX_CONTEXT_WINDOW_GEN = 32
# MAX_CONTEXT_WINDOW_GEN = 4096 # a recommended setting for real-world HTML documents
```
### 🌲 Build Block Tree
```python
from htmlrag import build_block_tree
block_tree, simplified_html = build_block_tree(simplified_html, max_node_words=MAX_NODE_WORDS_EMBED)
# block_tree, simplified_html = build_block_tree(simplified_html, max_node_words=MAX_NODE_WORDS_EMBED, zh_char=True) # for Chinese text
for block in block_tree:
    print("Block Content: ", block[0])
    print("Block Path: ", block[1])
    print("Is Leaf: ", block[2])
    print("")
# Block Content: <h1>Bellagio Hotel in Las</h1>
# Block Path: ['html', 'title']
# Is Leaf: True
#
# Block Content: <div>
# <p>Some other text</p>
# <p>Some other text</p>
# </div>
# Block Path: ['html', 'div']
# Is Leaf: True
#
# Block Content: <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# Block Path: ['html', 'p']
# Is Leaf: True
```
### ✂️ Prune HTML Blocks with Embedding Model
```python
from htmlrag import EmbedHTMLPruner
embed_model="BAAI/bge-large-en"
query_instruction_for_retrieval = "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: "
embed_html_pruner = EmbedHTMLPruner(embed_model=embed_model, local_inference=True, query_instruction_for_retrieval = query_instruction_for_retrieval)
# alternatively you can init a remote TEI model, refer to https://github.com/huggingface/text-embeddings-inference.
# tei_endpoint="http://YOUR_TEI_ENDPOINT"
# embed_html_pruner = EmbedHTMLPruner(embed_model=embed_model, local_inference=False, query_instruction_for_retrieval = query_instruction_for_retrieval, endpoint=tei_endpoint)
block_rankings=embed_html_pruner.calculate_block_rankings(question, simplified_html, block_tree)
print(block_rankings)
# [2, 0, 1]
# Alternatively, you can use BM25 to rank the blocks
from htmlrag import BM25HTMLPruner
bm25_html_pruner = BM25HTMLPruner()
block_rankings = bm25_html_pruner.calculate_block_rankings(question, simplified_html, block_tree)
print(block_rankings)
# [2, 0, 1]
from transformers import AutoTokenizer
chat_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B-Instruct")
pruned_html = embed_html_pruner.prune_HTML(simplified_html, block_tree, block_rankings, chat_tokenizer, MAX_CONTEXT_WINDOW_EMBED)
print(pruned_html)
# <html>
# <h1>Bellagio Hotel in Las</h1>
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# </html>
```
### ✂️ Prune HTML Blocks with Generative Model
```python
from htmlrag import GenHTMLPruner
import torch
# construct a finer block tree
block_tree, pruned_html = build_block_tree(pruned_html, max_node_words=MAX_NODE_WORDS_GEN)
# block_tree, pruned_html = build_block_tree(pruned_html, max_node_words=MAX_NODE_WORDS_GEN, zh_char=True) # for Chinese text
for block in block_tree:
    print("Block Content: ", block[0])
    print("Block Path: ", block[1])
    print("Is Leaf: ", block[2])
    print("")
# Block Content: <h1>Bellagio Hotel in Las</h1>
# Block Path: ['html', 'title']
# Is Leaf: True
#
# Block Content: <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# Block Path: ['html', 'p']
# Is Leaf: True
ckpt_path = "zstanjj/HTML-Pruner-Phi-3.8B"
if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
gen_embed_pruner = GenHTMLPruner(gen_model=ckpt_path, device=device)
block_rankings = gen_embed_pruner.calculate_block_rankings(question, pruned_html, block_tree)
print(block_rankings)
# [1, 0]
pruned_html = gen_embed_pruner.prune_HTML(pruned_html, block_tree, block_rankings, chat_tokenizer, MAX_CONTEXT_WINDOW_GEN)
print(pruned_html)
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
```
---
## Results
- **Results for [HTML-Pruner-Phi-3.8B](https://huggingface.co/zstanjj/HTML-Pruner-Phi-3.8B) and [HTML-Pruner-Llama-1B](https://huggingface.co/zstanjj/HTML-Pruner-Llama-1B) with Llama-3.1-70B-Instruct as chat model**.
| Dataset | ASQA | HotpotQA | NQ | TriviaQA | MuSiQue | ELI5 |
|------------------|-----------|-----------|-----------|-----------|-----------|-----------|
| Metrics | EM | EM | EM | EM | EM | ROUGE-L |
| BM25 | 49.50 | 38.25 | 47.00 | 88.00 | 9.50 | 16.15 |
| BGE | 68.00 | 41.75 | 59.50 | 93.00 | 12.50 | 16.20 |
| E5-Mistral | 63.00 | 36.75 | 59.50 | 90.75 | 11.00 | 16.17 |
| LongLLMLingua | 62.50 | 45.00 | 56.75 | 92.50 | 10.25 | 15.84 |
| JinaAI Reader | 55.25 | 34.25 | 48.25 | 90.00 | 9.25 | 16.06 |
| HtmlRAG-Phi-3.8B | **68.50** | **46.25** | 60.50 | **93.50** | **13.25** | **16.33** |
| HtmlRAG-Llama-1B | 66.50 | 45.00 | **60.75** | 93.00 | 10.00 | 16.25 |
---
## 📜 Citation
```bibtex
@misc{tan2024htmlraghtmlbetterplain,
title={HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems},
author={Jiejun Tan and Zhicheng Dou and Wen Wang and Mang Wang and Weipeng Chen and Ji-Rong Wen},
year={2024},
eprint={2411.02959},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2411.02959},
}
```
|
Devil371/blockassist-bc-rapid_rangy_barracuda_1755050404
|
Devil371
| 2025-08-13T02:27:45Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rapid rangy barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:27:03Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rapid rangy barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-12.6b-specialized-all-pruned-moe-only-18-experts
|
AmanPriyanshu
| 2025-08-13T02:27:30Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:26:53Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (18 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[LinkedIn](https://www.linkedin.com/in/aman-priyanshu/)
[Twitter/X](https://x.com/AmanPriyanshu6)
[Website](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[LinkedIn](https://www.linkedin.com/in/supriti-vijay/)
[Twitter/X](https://x.com/SupritiVijay)
[Website](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 18 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~12.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 18 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 18 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 56.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 18 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 18 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-12.6b-specialized-all-pruned-moe-only-18-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.6b-specialized-all-pruned-moe-only-18-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-12.6b-specialized-all-pruned-moe-only-18-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.6b-specialized-all-pruned-moe-only-18-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, basics, models, training, inference.<|end|><|start|>assistant<|channel|>final<|message|>## 1. What is Artificial Intelligence?
Artificial Intelligence (AI) is a field of computer science that studies how to build machines—usually software—that can **perform tasks that are normally associated with human intelligence**.
These tasks include:
| Human‑like ability | Typical AI task | Example |
|---|---|---|
| Perception | Recognizing objects in images | Detecting faces in a photo |
| Reasoning | Solving puzzles, planning routes | Chess, navigation |
| Learning | Improving performance over time | A model that gets better at predicting stock prices |
| Language | Understanding and generating text | ChatGPT, machine translation |
| Decision‑making | Choosing actions under uncertainty | Autonomous driving, medical diagnosis |
AI is **not** a single technology; it is a collection of algorithms, models, and engineering practices that let computers *simulate* aspects of human cognition.
---
## 2. How Does AI Work? (A High‑Level View)
1. **Data Collection**
AI systems need data to learn or make decisions. This can be images, text, sensor readings, or any structured information.
2. **Representation**
Raw data is transformed into a form that a computer can process.
* Example: an image is turned into a grid of pixel values; a sentence is turned into a sequence of word embeddings.
3. **Modeling**
A mathematical model (often a neural network) is built to capture patterns in the data.
* Example: a convolutional neural network (CNN) learns to detect edges, shapes, and eventually objects in images.
4. **Training / Learning**
The model’s parameters (weights) are adjusted so that its output matches known correct answers.
* Example: a supervised learning algorithm receives a labeled dataset (e.g., “cat” vs. “dog”) and adjusts weights to minimize prediction error.
5. **Inference / Deployment**
Once trained, the model can process new, unseen data and produce predictions or actions.
* Example: a trained language model generates a reply to a user’s question.
6. **Evaluation & Iteration**
The system’s performance is measured (accuracy, loss, etc.). If it’s not good enough, more data, a different architecture, or hyper‑parameter tuning is tried.
---
## 3. Core AI Techniques
| Technique
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. Two main ideas
| **What** | **Why it matters** |
|---|---|
| **Speed of light is the same everywhere** | Light travels at a fixed speed (about 300 000 km/s). That speed is the “speed of light” and it can’t be changed by anything. |
| **Time and space are linked** | How fast something moves and how far it moves are not independent. If you’re moving very fast, the way you experience time and space changes. |
---
### 2. The “Newtonian” view (before Einstein)
- **Space**: a fixed, straight grid.
- **Time**: ticks the same for everyone, no matter where or how fast you’re moving.
- **Speed**: just distance ÷ time.
This works well for everyday life, but it breaks down when you look at very fast objects (like light, particles, or planets).
---
### 3. Einstein’s first insight – *Relative Motion*
- **“Relative”** means “compared to something else.”
- If you’re on a moving train, a person on the platform sees you moving. But if you’re on the train, you see the platform moving.
- The same idea applies to light: the speed of light is the same for everyone, no matter who is measuring it.
---
### 4. The *First* Theory – **Special Relativity** (1905)
| **Key Points** | **What it says** |
|---|---|
| **Speed of light is constant** | Light always moves at the same speed, no matter who sends it. |
| **Time “slows down” for fast objects** | If you travel close to the speed of light, your clock ticks slower compared to a clock at rest. |
| **Space “shrinks” for fast objects** | A distance that looks normal to a slow observer looks shorter to a very fast observer. |
*Why?* Because the universe is built on the idea that light is the fastest messenger. Anything that can’t beat light must obey its speed.
---
### 5. The *Second* Theory – **General Relativity** (1914–1918)
- **Gravity is not a force, it’s geometry.**
- Imagine the universe as a flexible sheet
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain democratic functions, institutions, processes, checks, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Democracy in the modern world is not a single, one‑thing system but a set of inter‑related institutions, practices, and norms that together create a “rule‑by‑the‑people” order.**
Below is a practical, step‑by‑step look at how those pieces fit together in contemporary governments.
---
## 1. The Core Pillars
| Pillar | What it means | How it shows up in a modern state |
|--------|--------------|----------------------------------------|
| **Popular sovereignty** | The people are the ultimate source of political authority. | Elections, referendums, public consultations. |
| **Rule of law** | Laws are made, applied, and interpreted by independent institutions. | Constitutions, courts, legislations, regulatory agencies. |
| **Separation of powers** | Executive, legislative, and judicial branches operate independently. | Parliament/ Congress, President/Prime‑Minister, Supreme Court. |
| **Checks & balances** | Each branch can limit the power of the others. | Judicial review, legislative oversight, executive veto. |
| **Political pluralism** | Multiple parties and viewpoints compete for power. | Multi‑party elections, freedom of association, media pluralism. |
| **Civil‑political rights** | Individuals can participate freely and express themselves. | Freedom of speech, assembly, voting rights, anti‑discrimination laws. |
| **Political accountability** | Leaders are answerable to the public. | Elections, audits, whistle‑blower laws, independent media. |
---
## 2. How the Pieces Work Together
### 2.1. The Legislative Process
1. **Drafting** – A member of parliament (MP) or a committee writes a bill.
2. **First Reading** – Bill is introduced; no debate.
3. **Committee Stage** – Detailed scrutiny, amendments, expert testimony.
4. **Second Reading** – Debate on the bill’s main points; further amendments.
5. **Third Reading** – Final debate; the bill is voted on.
6. **Royal/Presidential assent** – In constitutional monarchies or presidential systems, the head of state signs the bill into law.
7. **Judicial review** – Courts can strike down laws that violate the constitution or fundamental rights.
### 2.2. The Executive
* **Policy agenda** –
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
SmallDoge/Doge-40M-MoE
|
SmallDoge
| 2025-08-13T02:25:04Z
| 17
| 0
|
transformers
|
[
"transformers",
"safetensors",
"doge",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-10T09:55:19Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/mikomic-anime-v10-superdetailed-sdxl
|
John6666
| 2025-08-13T02:24:53Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"lighting",
"backgrounds",
"detail",
"highly adaptable",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:19:43Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- lighting
- backgrounds
- detail
- highly adaptable
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1861617?modelVersionId=2107745).
This model created by [Riwer](https://civitai.com/user/Riwer).
|
AmanPriyanshu/gpt-oss-11.4b-specialized-all-pruned-moe-only-16-experts
|
AmanPriyanshu
| 2025-08-13T02:22:49Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:22:15Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (16 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 16 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~11.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 16 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 16 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 50.0% of original expert parameters
- **Reduced Computational Load**: Smaller router projection, with fewer candidate experts to score per token
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 16 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 16 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
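
Mechanically, direct pruning amounts to slicing each MoE layer's expert list and the matching rows of its router projection so the retained indices stay aligned. The sketch below is schematic: the attribute names (`router`, `experts`) are placeholders, not the actual `gpt_oss` module layout, and fused expert-weight tensors would instead be index-selected along their expert dimension.

```python
import torch
from torch import nn

def prune_moe_layer(router: nn.Linear, experts: nn.ModuleList, keep: list[int]):
    """Keep only the experts in `keep`, shrinking the router to match (schematic)."""
    idx = torch.tensor(sorted(keep))
    new_router = nn.Linear(router.in_features, len(idx), bias=router.bias is not None)
    new_router.weight.data = router.weight.data[idx].clone()   # one row per kept expert
    if router.bias is not None:
        new_router.bias.data = router.bias.data[idx].clone()
    new_experts = nn.ModuleList([experts[i] for i in idx.tolist()])
    return new_router, new_experts
```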
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-11.4b-specialized-all-pruned-moe-only-16-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-11.4b-specialized-all-pruned-moe-only-16-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-11.4b-specialized-all-pruned-moe-only-16-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-11.4b-specialized-all-pruned-moe-only-16-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, definition, components, how it works.<|end|><|start|>assistant<|channel|>final<|message|>## 1. What is Artificial Intelligence?
Artificial Intelligence (AI) is a field of computer science that seeks to build machines—whether software, robots, or other devices—that can perform tasks that were once thought to be the exclusive domain of humans. The “intelligence” part comes from the ability to:
| **Observe** – gather data from sensors, text, images, etc. |
| **Reason** – process that data, find patterns, make inferences. |
| **Act** – produce an output (a decision, a movement, a recommendation). |
In practice, AI is a collection of algorithms, models, and hardware that let a system learn from experience, adapt to new situations, and solve problems that are too complex for simple rule‑based programming.
---
## 2. Core Ingredients of AI
| **Data** | The raw input that the system receives (images, text, sensor readings, etc.). |
| **Representation** | A way to encode that data so a machine can manipulate it (e.g., a vector, a graph, a neural‑network layer). |
| **Processing Engine** | The algorithm or model that transforms the representation into useful information (e.g., a classifier, a planner). |
| **Evaluation / Feedback** | A metric or signal that tells the system whether it did well (e.g., a score, a human rating). |
| **Learning Mechanism** | A method that updates the processing engine based on feedback (e.g., gradient‑based learning, reinforcement learning). |
---
## 3. How AI Works – A Step‑by‑Step View
1. **Input → Representation**
*Example:* A camera captures a photo → the pixels are fed into a convolutional neural network (CNN) that turns them into a 3‑dimensional “feature vector.”
2. **Representation → Processing**
*Example:* The CNN passes the feature vector through a series of layers that compute a probability distribution over possible object classes.
3. **Processing → Output**
*Example:* The system outputs “cat” with 95 % confidence.
4. **Output → Evaluation**
*Example:* A human judge says the label is correct → the system receives a positive reward.
5. **Evaluation → Learning**
*Example:* The reward is used to adjust the weights of the CNN
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. What Einstein was trying to solve
When people were studying how objects move, they had two big puzzles:
1. **Speed of light vs. speed of a moving object** – If a light beam is shot from a moving car, does it travel faster than a beam shot from a stationary car?
2. **Time vs. distance** – If you travel very fast (close to the speed of light), does time “tick” differently for you than for someone who stays on Earth?
Einstein’s work answered both questions.
---
### 2. Two parts of the theory
| **Part** | **What it says** | **Why it matters** |
|---|---|---|
| **Galilean (Classical) Relativity** | The laws of motion (e.g., “if you push a ball, it moves”) are the same everywhere, no matter where you are. | This is the old idea that physics is the same on Earth, on a moon, or in a spaceship. |
| **General Relativity** | The universe is a 4‑dimensional space: 3 dimensions of space + 1 of time. The way objects move and how time flows are linked to the *shape* of this space. | It explains gravity, black holes, and why distant stars look slightly “off” (gravitational lensing). |
---
### 3. The key ideas in a nutshell
1. **Time is not a fixed clock**
* In Einstein’s view, time is a *coordinate* in the same way that distance is.
* If you travel very fast (close to the speed of light), the “distance” between two points in space changes, and so the *time* between two events changes too.
* That’s why astronauts on a very fast spacecraft would age a fraction of a second less than their Earth‑bound friends.
2. **Space is not a flat sheet**
* Imagine the universe as a flexible sheet that can bend.
* Gravity is the bending of this sheet: a massive object (like a planet or a black hole) pulls the sheet toward it, making the “distance” between points smaller.
* This bending changes how light travels, which is why we see the same star in slightly different places when we look from different angles.
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain democratic functions, institutions, processes, checks, etc.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “democracy” means today
In contemporary politics “democracy” is not a single recipe but a set of **principles** that shape how power is exercised:
| Principle | Typical Institutional/Procedural Expression |
|---|---|
| **Popular sovereignty** | The people (or their elected representatives) hold ultimate authority. |
| **Rule of law** | Laws are made, interpreted, and enforced by independent institutions. |
| **Separation of powers** | Executive, legislative, and judicial branches are distinct and can check one other. |
| **Pluralism & competition** | Multiple political parties, civil‑society groups, and media outlets compete for influence. |
| **Transparency & accountability** | Public information, audits, and oversight mechanisms keep officials answerable. |
| **Participation** | Elections, referenda, public consultations, and civic engagement. |
| **Protection of rights** | Constitutional guarantees, independent courts, and human‑rights institutions. |
These principles are woven into the **constitutional architecture** of a state, and they are reinforced by **political culture** (norms, traditions, civic education) and by **civil‑society institutions** (NGOs, media, academia).
---
### 2. Core democratic institutions in modern governments
| Institution | Core Functions | Typical Checks & Balances |
|---|---|---|
| **Constitution** | Codifies the distribution of power, rights, and procedures. | Judicial review, constitutional courts. |
| **Parliament / Legislature** | Debates, drafts, and passes laws; represents the electorate. | Parliamentary oversight of the executive, committees, and public hearings. |
| **Executive (President / Prime‑Minister)** | Implements laws, runs the state, and often has veto or emergency powers. | Parliamentary approval of budgets, impeachment procedures, judicial review of executive acts. |
| **Judiciary** | Interprets laws, adjudises disputes, protects rights. | Independent appointment processes, judicial oversight, appellate review. |
| **Electoral System** | Conducts free, fair, and regular elections. | Independent electoral commissions, international observers, legal remedies for electoral disputes. |
| **Civil‑Society & Media** | Provide information, critique, and mobilize citizens. | Freedom of expression, press‑law protections, watchdog NGOs. |
| **Public‑Sector
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755050157
|
mang3dd
| 2025-08-13T02:22:46Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:22:43Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts
|
AmanPriyanshu
| 2025-08-13T02:20:33Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:20:02Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (15 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 15 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~10.8B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 15 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
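
To make the routing step concrete, here is a rough sketch of top-k routing over a reduced expert pool. It is illustrative only: shapes follow the table above (2880-dim residual, top-4 routing), and implementations differ on whether the softmax runs before or after the top-k selection; this sketch normalizes after.

```python
import torch
import torch.nn.functional as F

def route(hidden, router_weight, top_k=4):
    """Pick top_k experts per token from however many experts remain.

    hidden: [tokens, dim]; router_weight: [num_experts, dim] (32 originally, 15 after pruning).
    """
    logits = hidden @ router_weight.T                 # [tokens, num_experts]
    weights, experts = torch.topk(logits, top_k, dim=-1)
    weights = F.softmax(weights, dim=-1)              # renormalize over the chosen experts
    return weights, experts

h = torch.randn(3, 2880)
w_full, e_full = route(h, torch.randn(32, 2880))      # original router
w_pruned, e_pruned = route(h, torch.randn(15, 2880))  # pruned router, same top-k
```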
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 15 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 46.9% of original expert parameters
- **Reduced Computational Load**: Smaller router projection, with fewer candidate experts to score per token
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
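
For the fine-tuning use case, parameter-efficient adapters are one lightweight option. A minimal sketch using `peft` LoRA; the `target_modules` names are placeholders, so inspect `model.named_modules()` for the projection names this checkpoint actually exposes:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType
import torch

model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Placeholder projection names; replace with the real attention projections for this model.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```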
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 15 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 15 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, definition, components, how it works.<|end|><|start|>assistant<|channel|>final<|message|>## 1. What is Artificial Intelligence?
Artificial Intelligence (AI) is a field of computer science that seeks to build systems capable of **intelligent behavior**—that is, the ability to learn, reason, and act in a way that mimises human cognition.
In practice, AI systems are built from a combination of:
| Component | What it does | Typical technology |
|---|---|---|
| **Sensors / Input** | Gathers raw data (text, images, sound, etc.) | Cameras, microphones, text encoders |
| **Representation** | Turns raw data into a machine‑friendly format | Neural‑network layers, symbolic models, graphs |
| **Processing / Reasoning** | Manipulates the representation to produce insights | Algorithms, inference engines, pattern‑matching |
| **Learning** | Improves the system over time | Machine‑learning, reinforcement learning, deep‑learning |
| **Output / Action** | Communes or acts on the insight | Speech, text, images, robot motion |
The “intelligence” comes from the system’s ability to **learn from experience** (data) and **generalize** that learning to new situations.
---
## 2. How AI Works – A Step‑by‑Step Overview
Below is a simplified pipeline that most AI systems follow. The exact details vary by domain (e.g., language, vision, robotics), but the core ideas are the same.
### 1. **Data Acquisition**
- **Input**: Sensors or pre‑existing data (e.g., a sentence, a photo, a video clip).
- **Encoding**: Convert the raw signal into a digital format (e.g., bytes, pixels, tokens).
### 2. **Representation**
- **Modeling**: Build a structure that captures the essential features of the input.
- *Symbolic models*: Grammar trees for language, logic gates for circuits.
- *Statistical models*: Feature vectors, probability distributions.
- *Neural models*: Layers of neurons that learn patterns.
### 3. **Processing / Reasoning**
- **Inference**: Apply rules or learned patterns to the representation.
- *Rule‑based*: If‑then logic, Boolean networks.
- *Statistical*: Bayesian inference, expectation maximization.
- *Neural*: Conference of layers, attention mechanisms.
### 4. **Learning**
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. What the theory is about
Einstein’s work is split into two main ideas:
| **Part** | **What it says** | **Why it matters** |
|---|---|---|
| **Classical (First) Relativity** | The speed of a moving object is measured relative to a reference point. | It explains why a car looks faster when you’re on a moving train. |
| **Quantum (Second) Relativity** | Space and time are not fixed; they can change depending on how fast something moves, especially near very fast (near‑light‑speed) objects. | It shows that the universe is a “soup” of particles that can behave in ways we can’t see with normal eyes. |
---
### 2. The “Classical” part – *Relative speed*
- **Speed is relative**: If you’re on a car that’s moving at 100 km/h, a car that’s moving at 200 km/h looks like it’s going 100 km/h faster.
- **No absolute speed**: There is no single “speed” that exists everywhere; it always depends on who’s point of reference.
---
### 3. The “Quantum” part – *Space, time, and light*
- **Particles can be “fuzzy”**: In the quantum world, particles can be in many places at the same time.
- **Light is a particle**: Light isn’t a single, fixed thing; it’s a stream of particles that can move at the speed of light (≈ 300 000 km/s).
- **Time can bend**: Near a very fast object (like a super‑fast spaceship or a black hole), the flow of time can change. Things that happen in the future can appear to happen sooner or later.
---
### 4. How it changed our view
- **Gravity is not a force**: Einstein showed that gravity is a *field* that pulls objects because of the way space itself is arranged.
- **The universe is a “wave”**: The universe is full of waves (light, sound, etc.) that carry information.
- **New physics**: The theory opened the way to quantum mechanics, explaining why atoms behave in ways that classical physics can’t predict.
---
### 5. Everyday
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain how democracy functions in modern governments. Provide overview of institutions, processes, checks, representation, elections, rule of law, etc.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What “democracy” means today
In contemporary politics “democracy” is not a single, fixed model. It is a **set of practices and institutions** that allow citizens to influence the political process, while also protecting minority rights, the rule of law, and the stability of the state. Most modern democrans share a few core features:
| Feature | Typical expression in a modern state | Why it matters |
|---|---|---|
| **Representative institutions** (parliament, council, etc.) | Legislators elected by proportional or district‑based ballots | Gives a voice to the electorate in law‑making |
| **Free, competitive elections** | Regular, multi‑party contests with universal suff suff | Determines who will hold office |
| **Rule of law & separation of powers** | Judiciary independent, executive & legislature distinct | Prevents abuse of power |
| **Civil‑society participation** | NGOs, media, protests, public consultation | Allows citizens to shape policy beyond ballots |
| **Checks & balances** | Constitutional limits, oversight bodies, judicial review | Keeps power in check |
| **Protection of minorities** | Quotas, affirmative‑action laws, constitutional safeguards | Prevents tyranny of the majority |
These elements can be arranged in different ways (parliamentary vs. presidential, federal vs. unitary, etc.), but the combination of them is what modern democrans call “democracy.”
---
### 2. How the system works in practice
#### 2.1 The electoral cycle
1. **Campaigning** – Parties and candidates present platforms, debate, and advertise.
2. **Voting** – Citizens cast ballots (paper, electronic, absentee).
3. **Counting & certification** – Election commissions verify results.
4. **Transition** – Newly elected officials assume office, often after a “hand‑over” period.
The electoral cycle is the most visible democratic mechanism, but it is only one layer of governance.
#### 2.2 Representative bodies
- **Parliament / Congress**: Drafts laws, debates policy, and holds the executive accountable.
- **Executive (President, Prime Minister, etc.)**: Implements laws, runs the state, and is usually elected or appointed by the legislature.
- **
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755051498
|
IvanJAjebu
| 2025-08-13T02:19:43Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:19:17Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/mikomic-anime-v10-anime-sdxl
|
John6666
| 2025-08-13T02:19:41Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"lighting",
"backgrounds",
"detail",
"highly adaptable",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:14:16Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- lighting
- backgrounds
- detail
- highly adaptable
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1861617?modelVersionId=2106959).
This model created by [Riwer](https://civitai.com/user/Riwer).
|
AXERA-TECH/AXCL
|
AXERA-TECH
| 2025-08-13T02:17:19Z
| 3
| 1
| null |
[
"license:bsd-3-clause",
"region:us"
] | null | 2025-03-06T08:37:26Z
|
---
license: bsd-3-clause
---
https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_setup.html#id2
|
AmanPriyanshu/gpt-oss-9.6b-specialized-all-pruned-moe-only-13-experts
|
AmanPriyanshu
| 2025-08-13T02:16:11Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:15:41Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (13 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 13 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 13 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 40.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
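To make steps 1–3 above concrete, here is a minimal, illustrative sketch of how experts might be ranked by router activation frequency and the top 13 retained for a single layer. The activation counts and variable names are hypothetical; this is not the actual pruning code used to produce this checkpoint.
```python
import numpy as np

num_original_experts = 32   # experts per layer in GPT-OSS-20B
num_kept_experts = 13       # experts retained in this pruned model

# Hypothetical per-expert activation counts for one layer, e.g. how often the
# router selected each expert across the evaluation benchmarks.
activation_counts = np.random.randint(0, 10_000, size=num_original_experts)

# Step 2 (expert ranking): order experts by activation frequency.
ranked = np.argsort(activation_counts)[::-1]

# Step 3 (systematic pruning): keep the 13 most frequently used experts.
kept_expert_ids = np.sort(ranked[:num_kept_experts])
print(f"Retained expert indices for this layer: {kept_expert_ids.tolist()}")

# A real pruning pass would then copy only these experts' weights into the smaller
# checkpoint and restrict the router's logits to the retained indices.
```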
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-all-pruned-moe-only-13-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-all-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-all-pruned-moe-only-13-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-all-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI definition, types, how it works: data, models, training, inference, neural networks, etc.<|end|><|start|>assistant<|channel|>final<|message|>## 1. What is Artificial Intelligence?
Artificial Intelligence (AI) is a set of computational techniques that let machines *think*—that is, that they can observe data, learn from it, and then use that knowledge to solve problems, make predictions, or generate new content. In short, AI is a **machine‑learning system** that can:
| Feature | Example |
|--------|--------|
| **Pattern recognition** | Spoting a face in a photo, a stock price trend, or a word in a sentence |
| **Generalization** | Infitting a rule that applies to many unseen examples |
| **Reasoning** | Choosing the best answer to a question, or the best route to a destination |
| **Creativity** | Writing a poem, composing a melody, or designing a new product |
AI is *not* a single algorithm; it is a family of methods that share the same goal: **to emulate human intelligence** in a purely digital medium.
---
## 2. How AI Works – The Core Ingredients
| Step | What Happens | Why It Matters |
|------|--------------|----------------|
| **1. Data Collection** | Gather a large set of observations (images, text, sensor readings, etc.) | Provides the “experience” the machine needs to learn from |
| **2. Representation** | Convert data into a machine‑friendly format (e.g., tensors, graphs, or embeddings) | Makes the data machine‑readable and comparable |
| **3. Model Building** | Build a hypothesis‑based structure (a *model*) that can explain the patterns in the data | Gives the machine a *theory* to test |
| **4. Training** | Feed the model with data and let it adjust its internal parameters (weights) so it best predicts unseen data | The model learns from the data, just like a human learning from experience |
| **5. Validation** | Test the model on a new dataset that the machine never saw before | Checks if the model truly generalized, not just memorized |
| **6. Deployment** | Use the trained model to answer new questions or solve tasks in real time | The machine now *acts* intelligently |
### 2.1 The Building Blocks
| Block | What Is It | How It Works |
|-------|------------|
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. What the theory says
- **Space and time are not fixed.**
In the universe, the distance between two points and the speed of a moving object can change depending on the *direction* in which they are measured.
- **The same thing can look different from different viewpoints.**
If you look at a particle from one side of the universe, it may seem to move slowly. If you look from another side, it may seem to move fast. The “speed” you measure depends on the *angle* you’re looking from.
- **The universe is a “relativ” (relative) playground.**
The laws that we use on Earth (like “if two objects collide, they exchange energy”) still hold, but the *amount* of energy and the *speed* of objects can shift when you change the viewpoint.
---
### 2. How it works – a quick analogy
Imagine a **two‑dimensional world** (like a flat sheet of paper).
- **Horizontal axis (x)**: Think of this as “time.”
- **Vertical axis (y)**: Think of this as “space.”
If you draw a straight line from the top to the bottom of the sheet, you’re looking along the *vertical* direction.
If you draw a straight line from the left to the right, you’re looking along the *horizontal* direction.
Now, suppose you tilt the sheet at a 45° angle.
- The line that was straight vertically is now *tilted*; it’s no longer a straight line in the sheet’s geometry.
- The same line that was straight horizontally is also tilted.
In relativity, the *tilting* of the sheet is like the *direction* in space. The “straightness” (or “straightness” of a line) is like the *speed* of an object. When you tilt the sheet, the straightness changes – just like how the speed of an object changes when you look from a different direction.
---
### 3. Key ideas
| Concept | Everyday picture | Relativity twist |
|---------|-------------------|-------------------|
| **Speed** | A car moving at 60 km/h | The car’s speed can be 60 km/h *or* 120 km/h depending on the angle
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain concept, institutions, checks, participation, etc.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What is “democracy” in a modern state?
In practice, a democracy is a system in which **the people have the ultimate authority** over the state’s laws, policies, and leaders. The authority is exercised through a set of institutions and procedures that are designed to be:
| **Key Feature** | **How it works in practice** | **Why it matters** |
|-------------------|---------------------------|---------------------|
| **Representative elections** | Citizens vote for elected officials (party‑based or independent). | Gives a clear mandate for the government. |
| **Rule of the law** | Laws are made, interpreted, and enforced by independent bodies. | Protects rights and limits power. |
| **Checks & balances** | Separation of powers (executive, legislative, judiciary). | Prevents concentration of power. |
| **Freedom of expression & assembly** | Citizens can speak, write, protest, and organize. | Enables scrutiny and accountability. |
| **Transparency & accountability** | Public records, audits, media scrutiny. | Keeps officials answerable to the public. |
| **Pluralism & minority rights** | Legal guarantees for minority groups, plural parties. | Ensures no single group dominates. |
These are the “building blocks” that modern democries use to keep the system functioning.
---
### 2. How the system actually operates
#### 2.1 Representative institutions
1. **Parliament / Congress**
*Elected members debate, draft laws, and vote on policy.*
*They are the “voice” of the electorate, but they are also subject to scrutiny by the press and civil‑rights groups.*
2. **Executive (President, Prime‑Minister, etc.)**
*Runs day‑to‑day policy, implements laws, and represents the state.*
*Must be elected (or appointed by parliament) and is accountable through elections, parliamentary oversight, and the media.*
3. **Judiciary**
*Interinterpres laws, adjudises disputes, and checks executive power.*
*Its independence is protected by separate appointment processes and legal safeguards.*
#### 2.2 Electoral mechanisms
| **Type** | **What it does** | **Typical safeguards** |
|----------|-----------------|------------------------|
| **Ball‑ball** | Simple, quick, often used in local
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
mradermacher/OLMoE-1B-5B-i1-GGUF
|
mradermacher
| 2025-08-13T02:15:41Z
| 10
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:motionlabs/OLMoE-1B-5B",
"base_model:quantized:motionlabs/OLMoE-1B-5B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-23T14:13:10Z
|
---
base_model: motionlabs/OLMoE-1B-5B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/motionlabs/OLMoE-1B-5B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OLMoE-1B-5B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/OLMoE-1B-5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
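As an unofficial starting point (not part of the upstream instructions), the sketch below shows how one of the quants provided in this repo — the i1-Q4_K_M file from the table below — could be fetched and run with llama-cpp-python; the context size and prompt are arbitrary example values.
```python
# Unofficial sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/OLMoE-1B-5B-i1-GGUF",
    filename="OLMoE-1B-5B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain what a mixture-of-experts model is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```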
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q4_0.gguf) | i1-Q4_0 | 2.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q4_1.gguf) | i1-Q4_1 | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMoE-1B-5B-i1-GGUF/resolve/main/OLMoE-1B-5B.i1-Q6_K.gguf) | i1-Q6_K | 3.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AmanPriyanshu/gpt-oss-9.0b-specialized-all-pruned-moe-only-12-experts
|
AmanPriyanshu
| 2025-08-13T02:14:03Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:13:35Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (12 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 12 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 12 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 12 experts per layer
- **No Retraining**: Direct removal without additional training steps
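For readers unfamiliar with MoE routing, the following is a generic, illustrative sketch of the top-k routing step described above — 4 experts selected per token out of the 12 retained in this model. It mirrors the dimensions in the architecture table but is not the actual GPT-OSS implementation.
```python
# Generic, illustrative top-k MoE routing (not the actual GPT-OSS code).
import torch

hidden_dim, num_experts, top_k = 2880, 12, 4

x = torch.randn(1, hidden_dim)                      # one token's hidden state
router = torch.nn.Linear(hidden_dim, num_experts)   # router produces one logit per expert

probs = router(x).softmax(dim=-1)                   # [1, num_experts]
weights, expert_ids = torch.topk(probs, k=top_k, dim=-1)
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the selected experts

# A real MoE layer would now run only the selected experts and mix their outputs:
# y = sum(weights[0, i] * experts[expert_ids[0, i]](x) for i in range(top_k))
print(expert_ids.tolist(), weights.tolist())
```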
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 37.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 12 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 12 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.0b-specialized-all-pruned-moe-only-12-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.0b-specialized-all-pruned-moe-only-12-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.0b-specialized-all-pruned-moe-only-12-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.0b-specialized-all-pruned-moe-only-12-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI concept, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## 1. What is Artificial Intelligence?
Artificial Intelligence (AI) is a set of computational techniques that let machines “think” and “act” in ways that resemble human cognition. In short, AI is a **software system that can learn, reason, and make decisions** without a human brain.
| **Key Features** | **What Makes It “Artificial”** | **What Makes It “Intelligent”** |
|-------------------|---------------------------|---------------------------|
| **Learning** | Learges from data it has seen | Learges from patterns, not from a pre‑built rule set |
| **Reasoning** | Uses logic, inference, and prediction | Can generate new solutions to problems |
| **Generalization** | Works on a wide range of tasks | Adises to new contexts |
| **Autonomy** | Can act without explicit human control | Makes decisions that are useful or optimal |
AI is not a single technology; it is a **family of methods** that can be combined to build a system that can:
* **Recognize** patterns in images, text, or sound (image, speech, text recognition)
* **Predict** future events from past data (forecasting, recommendation)
* **Generate** new content (writing, music, art)
* **Plan** and **solve** problems (planning, optimization, game‑playing)
---
## 2. How AI Works – The Core Ingredients
### 2.1 Data
AI starts with a **dataset** – a collection of facts, images, or text. The data can be:
* **Structured** (labels, categories) – e.g., a labeled dataset of “cat” vs “dog” images.
* **Unstructured** (raw, noisy) – e.g., millions of tweets with no labels.
### 2.2 Representation
The data is turned into a **machine‑readable format**:
* **Text → Tokens** (words, characters)
* **Images → Pixels + Features** (color, shape, texture)
* **Audio → Frequencies** (pitch, timo)
### 2.3 Training
The system learns by **exposing it to the data** and letting it **try** to answer a question. The learning process is usually:
| **Method** | **What Happens** | **Why It Works** |
|------------|-----------------|-----------------|
| **Revised
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. What the idea is
Einstein’s theory says that **the way we measure distances, time, and mass changes when objects move very fast or very far from each other**. In everyday physics we treat objects as if they are “fixed” in space, but when they are moving, especially over huge distances, that’s not true.
---
### 2. The two main ideas
| Concept | What we normally think | What Einstein says |
|--------|------------------------|------------------------|
| **Time** | “Time is the same everywhere.” | Time *depends* on the motion of an object. If something moves, the time it takes to travel changes. |
| **Mass** | “Mass is a fixed property of an object.” | Mass *depends* on the motion of an object. An object can appear heavier or lighter depending on where it is. |
---
### 3. How it works – the “clock” example
1. **Two clocks on a moving spaceship**
- Clock A is on the spaceship.
- Clock B is on a planet far away.
2. **If the spaceship is moving toward the planet**
- Clock A’s time ticks *slier* than Clock B’s time.
- The difference in the time between the two clocks is exactly the *speed* of the spaceship.
3. **If the spaceship is moving away from the planet**
- Clock A’s time ticks *fier* than Clock B’s time.
- The difference is again the speed, but now in the opposite direction.
So the *time difference* between two clocks tells us how fast the spaceship is moving. That’s the “time moves with the object” part.
---
### 4. Mass changes the same way
If we look at a *mass* (like a ball) on the spaceship:
- While the spaceship is moving toward the planet, the ball’s mass appears *smaller* than it would be on the planet.
- While the spaceship is moving away, the ball’s mass appears *larger*.
The change in mass is exactly the same as the change in time. That’s the “mass changes with the object” part.
---
### 5. Why it matters
- **Gravity**: The theory explains why a ball feels heavier when it
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain concept, institutions, checks, participation, representation, accountability, checks of power, etc.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What is “democracy” in today’s world?
In the 21‑st‑century, democracy is usually understood as a **system of government in which the people have the ultimate authority** over the state’s policies, laws, and leaders. The core ideas are:
| Feature | How it works in practice |
|--------|------------------------|
| **Representative** | Citizens elect elected officials (parliament, council, etc.) who act on the public’s will. |
| **Participative** | Citizens can influence policy through voting, petitions, public debate, protests, and civic engagement. |
| **Accountable** | Leaders are answerable to the public, through elections, oversight bodies, and the media. |
| **Rule‑of‑law** | Laws are made, interpreted, and enforced by independent institutions (cour courts, judiciary). |
| **Checks of power** | Separation of branches (executive, legislative, judiciary) and checks (epar, oversight, civil‑rights). |
### 2. How democracy shows itself in modern governments
| Institutional | What does it do? | Typical examples |
|----------------|----------------|----------------|
| **Parleg‑/Council** | Elects leaders, drafts laws, debates policy. | Parliament (UK, Australia), Senate (US), Legislative Assembly (India). |
| **Executive** | Implements policy, runs day‑of‑the‑state. | President/Prime‑Minister, Cabinet. |
| **Judiciary** | Interinterpres law, protects rights. | Courts, Supreme Court, Constitutional Review Board. |
| **Civil‑Rights Bodies** | Protect minority rights, ensure fairness. | Human‑Rights Commission, Ombudsman. |
| **Media & NGOs** | Inform the public, hold leaders in check. | Newspapers, watchdog groups. |
| **E‑polling & Digital Platforms** | Facilitate voting, debate, data‑analysis. | Online voting pilots, social‑media campaigns. |
### 3. The “checks” that keep democracy alive
1. **Separation of branches** – The executive, legislative, and judiciary are independent.
2. **Checks & balances** – Overswatch committees, parliamentary oversight, judicial review.
3. **Transparency** – Open‑data portals, public‑record of‑decations,
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
ade127/modernbert-llm-router
|
ade127
| 2025-08-13T02:12:25Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T09:11:00Z
|
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: modernbert-llm-router
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-llm-router
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0320
- F1: 0.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.048 | 1.0 | 479 | 0.0369 | 0.9889 |
| 0.0296 | 2.0 | 958 | 0.0320 | 0.9927 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts
|
AmanPriyanshu
| 2025-08-13T02:11:59Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:11:34Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (11 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~8.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 11 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 11 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 34.4% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
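To illustrate the speculative-decoding use case listed above, here is a hedged sketch using transformers' assisted generation, with this pruned checkpoint as the draft model and the full GPT-OSS-20B as the target. Device placement and memory settings are placeholders, and this pairing has not been benchmarked here.
```python
# Illustrative sketch of assisted (speculative) generation: the pruned model drafts
# tokens that the full GPT-OSS-20B then verifies. Settings are placeholders only.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("Artificial intelligence is", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```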
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 11 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 11 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-all-pruned-moe-only-11-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, definition, types, how it works: data, models, training, inference, neural networks, etc.<|end|><|start|>assistant<|channel|>final<|message|>## What Is Artificial Intelligence?
Artificial Intelligence (AI) is a set of computational systems that can **undercome** or **underperform** tasks that were traditionally performed by humans. In other words, AI is a machine‑learning system that can:
* **Observe** patterns in data (e.g., text, images, numbers, sounds).
* **Infer** relationships or predictions from those patterns.
* **Act** by generating new outputs (answers, predictions, actions) that are useful to humans.
AI is not a single algorithm or a single “brain”; it is a **collection of techniques** that can be combined to produce a system that behaves like a human mind in a specific domain.
---
## Core Ideas Behind AI
| Concept | What it does | How it works |
|--------|----------------|----------------|
| **Data** | Stores a set of facts (e.g., a corpus of text, a set of images, a set of numbers). | Usually in a **memory** (e.g., a database, a tensor‑array). |
| **Model** | A mathematical representation of the data that captures the relationships between data points. | Often a **probability‑based** or **functional‑based** model (e.g., a neural network, a decision‑tree). |
| **Training** | The process of learning the model from the data. | Reiterative: repeatedly expose the model to data and reward it for correct predictions. |
| **Inference** | Using the trained model to answer a new question or perform a new task. | Apply the model to new data and output the prediction. |
| **Evaluation** | Measure how well the model performs. | Compare predictions to ground truth or to human performance. |
---
## How AI Works in Practice
### 1. **Data Collection**
* **Input**: The system receives a set of inputs (text, images, numbers, audio, etc.).
* **Storage**: The inputs are stored in a memory (e.g., a database, a tensor array).
### 2. **Model Construction**
* **Mathematical Representation**: The data is represented by a **model**.
* **Neural Networks** (e.g., convolutional networks, recurrent‑state networks) for images/text.
* **Decision Trees**
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s theory of relativity** is a way of understanding how the universe works when we look at the motion of objects that are far apart from each point in space.
---
### 1. What we know before
* **Newton’s law of gravity** (the *law of inertia*):
If one object is already moving in a direction, any other object will move in the opposite direction, and the two objects will keep a constant speed relative to each other.
In other words, if a ball is moving to the right, another ball will move to the left, and the two balls will keep the same speed.
* **The *law of simultity*** (the *law of simultity*):
If two points are at the same distance from each other, the two points will see the same motion at the same time.
In other words, if two points are at the same distance, the two points will see the same motion at the same time.
These two laws are true when the points are **close** to each other (e.g., in a laboratory or on a planet). But they break when the points are **far** from each other (e.g., between a planet and a star).
---
### 2. What Einstein noticed
When we look at a *very far* distance, the two points do not see the same motion at the same time.
Instead, the motion that we see at one point is *different* from the motion we see at the other point.
This is because the two points are far enough that the *time* between the two points is *not the same*.
---
### 3. The new idea
Einstein’s theory says that:
* **The *law of simultity* is wrong for far distances.**
The two points do not see the same motion at the same time.
The motion we see at one point is *different* from the motion we see at the other point.
* **The *law of inertia* still works.**
Even when the points are far apart, if one point is moving in a direction, the other point will move in the opposite direction, and the two points will keep a constant speed relative to each other.
---
### 4. How it works
Imagine a *two‑way* universe:
* **Space** – the *distance* between two points.
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain role, institutions, checks, participation, checks, etc.<|end|><|start|>assistant<|channel|>final<|message|>### 1. The Core Idea of Democracy in Modern Governments
At the most level, democracy is the **“rule by the people”** principle. In modern states it is expressed by a set of mechanisms that let the citizenry shape the laws, the leaders, and the policies that govern them. These mechanisms are not a single thing; they are a **network of institutions, norms, and practices** that together keep the government accountable, responsive, and legitimate.
---
## 2. How Democracy Functions in Practice
| **Institution / Mechanism** | **What It Does** | **Typical Features** | **Why It Matters** |
|---|---|---|---|
| **Representative Elections** | Citizens choose leaders (political parties, candidates) who will run the government. | • Regular (e.g., 4‑year, 2‑year) elections<br>• Ball ballots, polling, campaign rules<br>• Electoral commissions, independent oversight | • Gives a mandate for policy decisions<br>• Creates a “check‑on” for leaders |
| **Parleg‑al Committees / Councors** | Elected representatives debate, draft, and vote on laws. | • Legislative chambers (parpar, parliament, senate) <br>• Commitors, caucies, committees<br>• Majority or majority‑plus‑minority rules | • Transensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensensens
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
SalmonAI123/whisper-small-vi
|
SalmonAI123
| 2025-08-13T02:11:10Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-12T16:55:04Z
|
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-vi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1870
- Wer: 21.6303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
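For reference, here is a minimal sketch of how the hyperparameters above could be expressed as `Seq2SeqTrainingArguments` in Transformers. This is an illustrative reconstruction, not the actual training script used for this model.
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the listed hyperparameters; the real script may differ.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-vi",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```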
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2573 | 0.32 | 200 | 0.2834 | 29.3170 |
| 0.2007 | 0.64 | 400 | 0.2387 | 22.8230 |
| 0.1962 | 0.96 | 600 | 0.2174 | 20.1278 |
| 0.137 | 1.28 | 800 | 0.2048 | 21.7818 |
| 0.126 | 1.6 | 1000 | 0.1947 | 22.1146 |
| 0.1261 | 1.92 | 1200 | 0.1870 | 21.6303 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Osrivers/realisticAmateurs_v10.safetensors
|
Osrivers
| 2025-08-13T02:10:46Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-12T20:36:24Z
|
---
license: creativeml-openrail-m
---
|
Jaewoo49/t5-small-Q4_K_M-GGUF
|
Jaewoo49
| 2025-08-13T02:08:47Z
| 0
| 0
| null |
[
"gguf",
"summarization",
"translation",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"base_model:google-t5/t5-small",
"base_model:quantized:google-t5/t5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-08-13T02:08:45Z
|
---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
- llama-cpp
- gguf-my-repo
datasets:
- c4
base_model: google-t5/t5-small
---
# Jaewoo49/t5-small-Q4_K_M-GGUF
This model was converted to GGUF format from [`google-t5/t5-small`](https://huggingface.co/google-t5/t5-small) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google-t5/t5-small) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jaewoo49/t5-small-Q4_K_M-GGUF --hf-file t5-small-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jaewoo49/t5-small-Q4_K_M-GGUF --hf-file t5-small-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jaewoo49/t5-small-Q4_K_M-GGUF --hf-file t5-small-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jaewoo49/t5-small-Q4_K_M-GGUF --hf-file t5-small-q4_k_m.gguf -c 2048
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755050813
|
IvanJAjebu
| 2025-08-13T02:08:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:07:53Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts
|
AmanPriyanshu
| 2025-08-13T02:08:06Z
| 0
| 0
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:07:46Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (9 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 9 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~7.2B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 9 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 9 experts per layer
- **No Retraining**: Direct removal without additional training steps
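As a rough illustration, the selection step amounts to keeping the most frequently routed experts in each layer. The sketch below is a simplified, hypothetical reconstruction (the function and variable names are ours, not the actual pruning code):
```python
import numpy as np

def select_experts(activation_counts: np.ndarray, k: int = 9) -> list[list[int]]:
    """Keep the k most frequently activated experts per layer.

    activation_counts has shape (num_layers, num_experts) and holds how often
    the router selected each expert on the target evaluation tasks.
    """
    kept = []
    for layer_counts in activation_counts:
        top_k = np.argsort(layer_counts)[::-1][:k]  # most-used experts first
        kept.append(sorted(top_k.tolist()))
    return kept

# Example with simulated routing counts: 24 layers x 32 experts
counts = np.random.randint(0, 1000, size=(24, 32))
print(select_experts(counts, k=9)[0])  # indices of the 9 experts kept in layer 0
```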
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 28.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
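For the speculative-decoding use case, the pruned model can in principle act as a draft model through assisted generation in Transformers. The following is a hedged sketch (it assumes both checkpoints fit in memory and that assisted generation accepts this architecture; it is not from the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Target: the full GPT-OSS-20B. Draft: this pruned 9-expert variant.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("What is artificial intelligence?", return_tensors="pt").to(target.device)
# assistant_model enables assisted (speculative) decoding in generate()
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```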
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 9 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 9 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, definition, types, how it works via algorithms, data, learning, etc.<|end|><|start|>assistant<|channel|>final<|message|>### What is Artificial Intelligence?
Artificial Intelligence (AI) is a set of computer‑based systems that can perceive, reason, and act on a level of human‑like cognition. In simple terms, AI is a machine that can “think” by following a set of rules, learn from data, and make decisions that resemble human judgment.
| **Key concepts** | **What does AI do?** |
|--------------------------- |--------------------------- |
| **Representation** – AI stores information in a *state* (e.g., a number, a word, a picture). | It can “see” a picture, “read” a text, or “count” a number. |
| **Inference** – AI uses *rules* (logic) to ded infer from the state. | It can decide whether the picture is a cat or a dog. |
| **Learning** – AI can acquire new knowledge from data. | It can improve its performance over time. |
| **Action** – AI can produce an output (e.e. a recommendation, a route, a text). | It can answer a question or solve a puzzle. |
---
## How AI Works – The Core Architecture
1. **Input → Process → Output**
The whole cycle is a *pipeline* that turns data into a useful answer.
2. **Data (Input)**
• **Text** – sentences, paragraphs, or raw data.
• **Images** – pixels, shapes, or patterns.
• **Numbers** – sequences, probabilities, or statistics.
• **Sensors** – sound, touch, or visual signals.
3. **Representation**
• **Tokens** – a single piece of data that the system can hold.
• **State Space** – a collection of all possible states (e.g., “cat”, “dog”, “unknown”).
• **Prob‑Space** – a probability that each state is true.
4. **Inference Engine**
• **Logic** – a rule that tells the system how to choose a state.
• **Prob‑Inference** – the engine picks the most likely state.
• **Decision** – the system outputs the chosen state.
5. **Learning**
• **Training** – the system is fed a large set of data and learns the rule.
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need explain relativity in simple terms.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. What the problem was
Imagine you have a **ball** that is moving around a **circle**.
If you look from the ball’s side, the circle looks to be **moving**.
If you look from the circle’s side, the ball looks to be **moving**.
In physics, the “ball” is a **particle** (like a proton, a photon, or a molecule).
The “circle” is the **background** that everything in the universe is made of the same kind of material – the **matter**.
The question is: **Does the particle or the background move?**
If they move, then the universe is made of the same kind of material (the same kind of “matter”).
If they don’t move, then the universe is made of different kinds of material (different “matter” and “energy”).
---
### 2. What Einstein said
Einstein’s idea is that **the particle and the background are made of the same kind of material**.
In other words, the universe is made of a **“one‑thing”** (call it *matter*).
Because of this, the particle can’t change its state without changing the background, and vice versa.
---
### 3. How it works
1. **Matter is the same** – the particle and the background are made of the same kind of material.
Think of a **magball** that is made of a **mag**.
The magball can’t change its shape unless the mag changes its shape.
2. **The particle’s motion is a change in the background** –
When the particle moves, it changes the background in a way that the background changes.
The background is not moving in the particle’s view, but it is moving in the particle’s view.
This is the **“rel”** (rel‑ = “rel‑”?).
3. **The background’s motion is a change in the particle** –
When the background moves, the particle changes in a way that the particle changes.
The particle is not moving in the background’s view, but it is moving in the background’s view.
Because the particle and the background are made of the same kind of material, the motion of the particle is **just
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain how democracy functions in modern governments. Provide overview, principles, institutions, checks, representation, accountability, deliberation.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What is “democracy” in a modern government?
In most 20‑th‑cent‑age states, democracy is not a single form of government. It is a set of **principal mechanisms** that give the people a say in the political process. The core ideas are:
| Principle | How it works in practice |
|------------|---------------------------|
| **Representation** | Citizens elect (or vote for) a leaders or a governing body that is supposed to act on their interests. |
| **Accountability** | Leaders are answerable to the public, through elections, courts, media, and civil‑rights bodies. |
| **Participation** | Citizens can vote, protest, petition, or run a campaign. |
| **Delivision** | Policy is debated, debated, and decided by a body of experts or the public. |
| **Checks & Balings** | No concentration of power; checks by courts, legislature, executive, and civil‑society. |
These are not “one‑thing” but a “whole‑thing” that is embedded in the modern state.
---
## 2. How democracy functions in modern governments
### 2.1 Electoral systems
* **Primary elections** – The people choose a leader or a governing body. In most countries, the electoral system is a **plural‑party** system, with a **two‑party** or **multi‑party** system. The electoral process is a **check** on the legitimacy of the leader.
* **Secondary elections** – The people can “re‑elect” a leader if the leader fails to deliver. This is a **check** on the accountability of the leader.
* **Electability of the public** – The public can vote on the policy positions of the candidate. The public can also vote on the policy positions of the candidate.
### 2.2 Parliamentary or legislative bodies
* **Parpar** – The people elect a representative body that is supposed to represent the public. The body is a **check** on the concentration of power.
* **Parpar** – The body is a **check** on the concentration of power.
* **Parpar** – The body is a **check** on the concentration of power.
### 2.3
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755049363
|
indoempatnol
| 2025-08-13T02:07:28Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:07:25Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tensorblock/MrRobotoAI_A6-GGUF
|
tensorblock
| 2025-08-13T02:02:59Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:MrRobotoAI/A6",
"base_model:quantized:MrRobotoAI/A6",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T00:34:53Z
|
---
base_model: MrRobotoAI/A6
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## MrRobotoAI/A6 - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [MrRobotoAI/A6](https://huggingface.co/MrRobotoAI/A6).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [A6-Q2_K.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [A6-Q3_K_S.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
| [A6-Q3_K_M.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [A6-Q3_K_L.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [A6-Q4_0.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [A6-Q4_K_S.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [A6-Q4_K_M.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [A6-Q5_0.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [A6-Q5_K_S.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [A6-Q5_K_M.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [A6-Q6_K.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [A6-Q8_0.gguf](https://huggingface.co/tensorblock/MrRobotoAI_A6-GGUF/blob/main/A6-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/MrRobotoAI_A6-GGUF --include "A6-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MrRobotoAI_A6-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755050217
|
hobson123
| 2025-08-13T02:02:41Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:02:26Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mveroe/Qwen2.5-1.5B_lightr1_3_EN_6144_1p0_0p0_1p0_sft
|
mveroe
| 2025-08-13T02:02:24Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T23:03:21Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-1.5B_lightr1_3_EN_6144_1p0_0p0_1p0_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-1.5B_lightr1_3_EN_6144_1p0_0p0_1p0_sft
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adafactor with no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0a0+5228986c39.nv25.06
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755049377
|
Sayemahsjn
| 2025-08-13T02:01:15Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:01:11Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/delustious-v10-sdxl
|
John6666
| 2025-08-13T02:00:39Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"semi-realistic",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T01:54:31Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- semi-realistic
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1848230/delustious?modelVersionId=2091602).
This model was created by [mommymia](https://civitai.com/user/mommymia).
|
mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF
|
mradermacher
| 2025-08-13T02:00:05Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"large-language-models",
"DPO",
"direct-preference-optimization",
"reasoning",
"long-CoT",
"en",
"base_model:InfiX-ai/InfiAlign-Qwen-7B-DPO",
"base_model:quantized:InfiX-ai/InfiAlign-Qwen-7B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-12T22:28:14Z
|
---
base_model: InfiX-ai/InfiAlign-Qwen-7B-DPO
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- large-language-models
- DPO
- direct-preference-optimization
- reasoning
- long-CoT
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/InfiX-ai/InfiAlign-Qwen-7B-DPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InfiAlign-Qwen-7B-DPO-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/InfiAlign-Qwen-7B-DPO-GGUF
|
mradermacher
| 2025-08-13T02:00:05Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"large-language-models",
"DPO",
"direct-preference-optimization",
"reasoning",
"long-CoT",
"en",
"base_model:InfiX-ai/InfiAlign-Qwen-7B-DPO",
"base_model:quantized:InfiX-ai/InfiAlign-Qwen-7B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T20:17:51Z
|
---
base_model: InfiX-ai/InfiAlign-Qwen-7B-DPO
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- large-language-models
- DPO
- direct-preference-optimization
- reasoning
- long-CoT
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/InfiX-ai/InfiAlign-Qwen-7B-DPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InfiAlign-Qwen-7B-DPO-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/InfiAlign-Qwen-7B-DPO-GGUF/resolve/main/InfiAlign-Qwen-7B-DPO.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
judsfdf/micfran_tranferencia_gpt_300
|
judsfdf
| 2025-08-13T01:59:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T01:59:12Z
|
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** judsfdf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts
|
AmanPriyanshu
| 2025-08-13T01:59:24Z
| 0
| 1
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T01:59:09Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (4 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 4 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~4.2B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 4 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 4 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 12.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 4 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 4 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
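To make the "Expert Ranking" step concrete, here is a hypothetical sketch of counting router activations and keeping the top 4 experts per layer. The record structure and field names are assumptions for illustration only; consult the dataset card for the actual schema.
```python
from collections import Counter

# Hypothetical routing records: each entry lists the experts the router picked
# for one token in one layer. Field names are illustrative, not the real schema.
records = [
    {"layer": 0, "selected_experts": [3, 17, 21, 30]},
    {"layer": 0, "selected_experts": [3, 8, 21, 30]},
    {"layer": 1, "selected_experts": [2, 5, 11, 19]},
]

usage = {}  # layer -> Counter of how often each expert was activated
for rec in records:
    usage.setdefault(rec["layer"], Counter()).update(rec["selected_experts"])

# Keep the 4 most frequently activated experts in each layer
kept = {layer: [e for e, _ in counts.most_common(4)] for layer, counts in usage.items()}
print(kept)
```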
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>We need to explain what is artificial intelligence and how does it work. Provide explanation: AI is a system that uses data, etc. Provide details: It's a machine that uses algorithms, etc. Provide mention of the concept: It's a tool that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept: It's a system that uses the ability to compute. Provide mention of the concept:
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>We need to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relat relat in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms. The user wants to explain Einstein's theory of relativity in simple terms.
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>We need to answer: "How does democracy function in modern governments?" The user wants a question: "How does democracy function in modern governments?" They might be ambiguous. They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments?" They might be asking: "How does democracy function in modern governments
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts
|
AmanPriyanshu
| 2025-08-13T01:56:14Z
| 0
| 1
| null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T01:56:01Z
|
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (2 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 2 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~3.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 2 |
| **Layers** | 24 |
| **Top-k Routing** | 2 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
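As a quick sanity check, the configuration shipped with the checkpoint should reflect the pruned expert count. Below is a minimal sketch, assuming the config exposes standard MoE fields; the attribute names (e.g. `num_local_experts`) are assumptions and may differ between `transformers` versions:
```python
from transformers import AutoConfig

# Minimal sketch: inspect the checkpoint's config to confirm the pruned expert count.
# The attribute names below are assumptions about the gpt_oss config and may differ.
config = AutoConfig.from_pretrained(
    "AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts",
    trust_remote_code=True,
)
print(getattr(config, "num_local_experts", "n/a"))  # expected: 2
print(getattr(config, "num_hidden_layers", "n/a"))  # expected: 24
```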
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 2 experts per layer
- **No Retraining**: Direct removal without additional training steps
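The selection step itself can be illustrated with a short, self-contained sketch. This is not the authors' pipeline; the activation counts are synthetic placeholders and the snippet only demonstrates the "keep the most frequently routed experts per layer" idea described above:
```python
import numpy as np

# Illustrative sketch of activation-based expert selection (synthetic data, not the
# authors' code). activation_counts[layer, expert] = how often the router chose
# that expert on the target-domain evaluation set.
num_layers, num_experts, keep = 24, 32, 2
rng = np.random.default_rng(0)
activation_counts = rng.integers(0, 1000, size=(num_layers, num_experts))

# For each layer, keep the `keep` most frequently activated experts.
kept_experts = np.argsort(activation_counts, axis=1)[:, -keep:]
print(kept_experts[0])  # expert indices retained in layer 0
```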
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 6.2% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 2 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 2 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
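For completeness, a full GPU loading call might look like the following sketch; it mirrors the CPU example with only the two changed arguments:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Sketch of GPU loading; identical to the CPU example except for device_map.
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts",
    torch_dtype=torch.bfloat16,   # or torch.float16
    device_map="auto",            # places the model on GPU if one is available
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "AmanPriyanshu/gpt-oss-3.0b-specialized-all-pruned-moe-only-2-experts"
)
```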
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>
assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final <|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final 
<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|>
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final]<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|cha
nnel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
John6666/cat-pony-ani-il-v21-dark-sdxl
|
John6666
| 2025-08-13T01:54:28Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"bondage",
"latex",
"body",
"lighting",
"ponygirl",
"detail",
"pony",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T01:44:38Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- bondage
- latex
- body
- lighting
- ponygirl
- detail
- pony
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/594973/catpony?modelVersionId=2107345).
This model was created by [ggyydream](https://civitai.com/user/ggyydream).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755049959
|
IvanJAjebu
| 2025-08-13T01:53:55Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:53:40Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hejazizo/grpo-merged-checkpoint-594_2025-08-11_15-57
|
hejazizo
| 2025-08-13T01:51:55Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:hejazizo/merged-checkpoint-594",
"base_model:finetune:hejazizo/merged-checkpoint-594",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T19:57:44Z
|
---
base_model: hejazizo/merged-checkpoint-594
library_name: transformers
model_name: grpo-merged-checkpoint-594_2025-08-11_15-57
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for grpo-merged-checkpoint-594_2025-08-11_15-57
This model is a fine-tuned version of [hejazizo/merged-checkpoint-594](https://huggingface.co/hejazizo/merged-checkpoint-594).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hejazizo/grpo-merged-checkpoint-594_2025-08-11_15-57", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hejazizo-ali-pytopia/grpo-merged-checkpoint-594/runs/u1put54h)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755049696
|
IvanJAjebu
| 2025-08-13T01:49:21Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:49:12Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tester2k2/scBART-6
|
tester2k2
| 2025-08-13T01:41:40Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T01:41:36Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755048668
|
hobson123
| 2025-08-13T01:38:33Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:38:19Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/inclusionAI.ASearcher-Web-14B-GGUF
|
DevQuasar
| 2025-08-13T01:37:52Z
| 0
| 0
| null |
[
"gguf",
"text-generation",
"base_model:inclusionAI/ASearcher-Web-14B",
"base_model:quantized:inclusionAI/ASearcher-Web-14B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-12T23:58:11Z
|
---
base_model:
- inclusionAI/ASearcher-Web-14B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [inclusionAI/ASearcher-Web-14B](https://huggingface.co/inclusionAI/ASearcher-Web-14B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755048960
|
IvanJAjebu
| 2025-08-13T01:37:06Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:36:57Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jaewoo49/pythia-70m-Q4_K_M-GGUF
|
Jaewoo49
| 2025-08-13T01:36:32Z
| 0
| 0
|
gpt-neox
|
[
"gpt-neox",
"gguf",
"pytorch",
"causal-lm",
"pythia",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-70m",
"base_model:quantized:EleutherAI/pythia-70m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T01:36:30Z
|
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- llama-cpp
- gguf-my-repo
license: apache-2.0
datasets:
- EleutherAI/pile
library_name: gpt-neox
base_model: EleutherAI/pythia-70m
---
# Jaewoo49/pythia-70m-Q4_K_M-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-70m`](https://huggingface.co/EleutherAI/pythia-70m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-70m) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jaewoo49/pythia-70m-Q4_K_M-GGUF --hf-file pythia-70m-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jaewoo49/pythia-70m-Q4_K_M-GGUF --hf-file pythia-70m-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jaewoo49/pythia-70m-Q4_K_M-GGUF --hf-file pythia-70m-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jaewoo49/pythia-70m-Q4_K_M-GGUF --hf-file pythia-70m-q4_k_m.gguf -c 2048
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755047420
|
kojeklollipop
| 2025-08-13T01:35:40Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:35:37Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1755047433
|
lautan
| 2025-08-13T01:35:32Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:35:28Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAUR-dev/M-skills_in_rl_1e5_1epch_all_tasks_sft_zayne-sft
|
TAUR-dev
| 2025-08-13T01:32:45Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-13T01:31:17Z
|
# M-skills_in_rl_1e5_1epch_all_tasks_sft_zayne-sft
This model was created as part of the **skills_in_rl_1e5_1epch_all_tasks_sft_zayne** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: skills_in_rl_1e5_1epch_all_tasks_sft_zayne
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_skills_in_rl_1e5_1epch_all_tasks_sft_zayne_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-05, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__skills_in_rl_1e5_1epch_all_tasks_sft_zayne__v1", "sf_eval_before_training": false, "sf_wandb_project": "skills_in_rl_1e5_1epch_all_tasks_sft_zayne_sft", "sf_eval_steps": null, "run_name": "skills_in_rl_1e5_1epch_all_tasks_sft_zayne_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__skills_in_rl_1e5_1epch_all_tasks_sft_zayne__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-skills_in_rl_1e5_1epch_all_tasks_sft_zayne-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-skills_in_rl_1e5_1epch_all_tasks_sft_zayne-sft")
```
|
NexVeridian/Hunyuan-1.8B-Instruct-4bit
|
NexVeridian
| 2025-08-13T01:31:01Z
| 0
| 0
|
mlx
|
[
"mlx",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-1.8B-Instruct",
"base_model:quantized:tencent/Hunyuan-1.8B-Instruct",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-12T22:45:48Z
|
---
library_name: mlx
tags:
- mlx
base_model: tencent/Hunyuan-1.8B-Instruct
pipeline_tag: text-generation
---
# NexVeridian/Hunyuan-1.8B-Instruct-4bit
This model [NexVeridian/Hunyuan-1.8B-Instruct-4bit](https://huggingface.co/NexVeridian/Hunyuan-1.8B-Instruct-4bit) was
converted to MLX format from [tencent/Hunyuan-1.8B-Instruct](https://huggingface.co/tencent/Hunyuan-1.8B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Hunyuan-1.8B-Instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
cixzer/blockassist-bc-gregarious_long_cheetah_1755048518
|
cixzer
| 2025-08-13T01:30:46Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gregarious long cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:29:59Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gregarious long cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jaewoo49/distilgpt2-Q4_K_M-GGUF
|
Jaewoo49
| 2025-08-13T01:30:45Z
| 0
| 0
| null |
[
"gguf",
"exbert",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:openwebtext",
"base_model:distilbert/distilgpt2",
"base_model:quantized:distilbert/distilgpt2",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T01:30:39Z
|
---
language: en
tags:
- exbert
- llama-cpp
- gguf-my-repo
license: apache-2.0
datasets:
- openwebtext
co2_eq_emissions: 149200
base_model: distilbert/distilgpt2
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: WikiText-103
type: wikitext
metrics:
- type: perplexity
value: 21.1
name: Perplexity
---
# Jaewoo49/distilgpt2-Q4_K_M-GGUF
This model was converted to GGUF format from [`distilbert/distilgpt2`](https://huggingface.co/distilbert/distilgpt2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/distilbert/distilgpt2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jaewoo49/distilgpt2-Q4_K_M-GGUF --hf-file distilgpt2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jaewoo49/distilgpt2-Q4_K_M-GGUF --hf-file distilgpt2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jaewoo49/distilgpt2-Q4_K_M-GGUF --hf-file distilgpt2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jaewoo49/distilgpt2-Q4_K_M-GGUF --hf-file distilgpt2-q4_k_m.gguf -c 2048
```
|
wadawith/blockassist-bc-stocky_quick_bobcat_1755048465
|
wadawith
| 2025-08-13T01:30:21Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stocky quick bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:30:03Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stocky quick bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pictgensupport/vintagejapanesematchbooks
|
pictgensupport
| 2025-08-13T01:25:32Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T01:25:27Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: vintagejapanesematchbooks
---
# Vintagejapanesematchbooks
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `vintagejapanesematchbooks` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/vintagejapanesematchbooks', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7
|
BootesVoid
| 2025-08-13T01:25:17Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T01:25:14Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DANNYCERVANTES05
---
# Cme25Kwxg0Bmdgwtc9Nm621M8_Cme9977P904Ezrts8Gvzwmyn7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DANNYCERVANTES05` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "DANNYCERVANTES05",
"lora_weights": "https://huggingface.co/BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7', weight_name='lora.safetensors')
image = pipeline('DANNYCERVANTES05').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7/discussions) to add images that show off what you’ve made with this LoRA.
|
dragostom24/Deepseek_FT
|
dragostom24
| 2025-08-13T01:23:56Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-instruct",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-08-13T01:23:05Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/deepseek_metrics_plot.png
text: '-'
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
instance_prompt: null
license: unknown
---
# dsft
<Gallery />
## Model description
This model was trained using LoRA on a dataset consisting of GPT-4 Turbo-generated summaries and code functions taken from CodeSearchNet; only Python functions were used.
## Download model
[Download](/dragostom24/Deepseek_FT/tree/main) them in the Files & versions tab.
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755046667
|
mang3dd
| 2025-08-13T01:23:52Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:23:46Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
emre0005/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lithe_humming_wombat
|
emre0005
| 2025-08-13T01:23:45Z
| 96
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lithe_humming_wombat",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:00:46Z
|
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lithe_humming_wombat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stanpony/tiny_lm_8M_vanilla_full_20250812_205534
|
stanpony
| 2025-08-13T01:23:15Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T01:23:06Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bruhzair/prototype-0.4x316
|
bruhzair
| 2025-08-13T01:22:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T01:00:59Z
|
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x316
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.4a/snapshots/64723af7b548b0f19e8b4b3867117393282c7839
* /workspace/prototype-0.4x310
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.4a/snapshots/64723af7b548b0f19e8b4b3867117393282c7839
parameters:
weight: 0.5
density: 0.7
epsilon: 0.25
- model: /workspace/prototype-0.4x310
parameters:
weight: 0.5
density: 0.7
epsilon: 0.25
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: della
parameters:
normalize: false
lambda: 1.15
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
tokenizer:
source: base
dtype: float32
out_dtype: bfloat16
```
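To reproduce a merge of this kind, the YAML above can be saved to a file and passed to mergekit's CLI. A sketch, assuming the local snapshot paths in the config are replaced with paths or Hub IDs available on your machine:
```bash
pip install mergekit
# Save the configuration above as config.yaml, then run:
mergekit-yaml config.yaml ./prototype-0.4x316 --cuda
```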
|
zacapa/SO101_scrunchie_policy_1
|
zacapa
| 2025-08-13T01:18:23Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:zacapa/SO101_scrunchie_1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-13T01:15:46Z
|
---
datasets: zacapa/SO101_scrunchie_1
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
RE-N-Y/t5gemma-2b-2b-ul2-it
|
RE-N-Y
| 2025-08-13T01:14:00Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"t5_gemma_module",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T00:26:07Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755047557
|
IvanJAjebu
| 2025-08-13T01:13:42Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:13:33Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha1.0-var-assin2
|
g-assismoraes
| 2025-08-13T01:13:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T01:09:34Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blendinl/moondream2-drainage-lora
|
blendinl
| 2025-08-13T01:11:21Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"base_model:adapter:vikhyatk/moondream2",
"lora",
"transformers",
"text-generation",
"base_model:vikhyatk/moondream2",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T01:06:00Z
|
---
library_name: peft
license: apache-2.0
base_model: vikhyatk/moondream2
tags:
- base_model:adapter:vikhyatk/moondream2
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: moondream2-drainage-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moondream2-drainage-lora
This model is a fine-tuned version of [vikhyatk/moondream2](https://huggingface.co/vikhyatk/moondream2); the training dataset is not specified.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.17.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu128
- Datasets 2.19.1
- Tokenizers 0.21.4
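No usage snippet is provided; below is a minimal loading sketch, assuming this repository contains a PEFT-format adapter for `vikhyatk/moondream2` (which requires `trust_remote_code=True`, and whose generation API varies by revision):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "vikhyatk/moondream2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True, torch_dtype="auto")

# Attach the drainage LoRA adapter (assumes PEFT-format adapter files in this repo)
model = PeftModel.from_pretrained(base, "blendinl/moondream2-drainage-lora")
model.eval()
```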
|
NexVeridian/Hunyuan-4B-Instruct-4bit
|
NexVeridian
| 2025-08-13T01:09:32Z
| 0
| 0
|
mlx
|
[
"mlx",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Instruct",
"base_model:quantized:tencent/Hunyuan-4B-Instruct",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-12T23:10:32Z
|
---
base_model: tencent/Hunyuan-4B-Instruct
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# NexVeridian/Hunyuan-4B-Instruct-4bit
This model [NexVeridian/Hunyuan-4B-Instruct-4bit](https://huggingface.co/NexVeridian/Hunyuan-4B-Instruct-4bit) was
converted to MLX format from [tencent/Hunyuan-4B-Instruct](https://huggingface.co/tencent/Hunyuan-4B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Hunyuan-4B-Instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755045658
|
kojeklollipop
| 2025-08-13T01:06:43Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:06:32Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755045585
|
koloni
| 2025-08-13T01:04:36Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:04:30Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755046832
|
IvanJAjebu
| 2025-08-13T01:01:54Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:01:28Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CheapsetZero/965564f6-7c31-434d-ab93-3155a0cb53be
|
CheapsetZero
| 2025-08-13T01:00:46Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T00:49:01Z
|
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 965564f6-7c31-434d-ab93-3155a0cb53be
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ccd86841471aa985_train_data.json
ds_type: json
field: prompt
path: /workspace/input_data/
split: train
type: completion
ddp_find_unused_parameters: false
debug: null
deepspeed: null
early_stopping_patience: null
ema_decay: 0.999
ema_update_after_step: 100
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
greater_is_better: false
group_by_length: false
hub_model_id: CheapsetZero/965564f6-7c31-434d-ab93-3155a0cb53be
learning_rate: 0.0001
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_nan_inf_filter: true
logging_steps: 1
lora_alpha: 256
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_steps: 8640
metric_for_best_model: eval_loss
micro_batch_size: 8
min_lr: 1.0e-05
mlflow_experiment_name: /tmp/ccd86841471aa985_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
reward_model_sampling_temperature: 0.7
s2_attention: null
sample_packing: false
save_total_limit: 3
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trl:
beta: 0.015
max_completion_length: 1024
num_generations: 16
reward_funcs:
- rewards_331f1bc8-fd1a-45f2-8342-6a1255fa8cb4.reward_low_readability
- rewards_331f1bc8-fd1a-45f2-8342-6a1255fa8cb4.reward_specific_char_count
- rewards_331f1bc8-fd1a-45f2-8342-6a1255fa8cb4.reward_reasoning_keywords
- rewards_331f1bc8-fd1a-45f2-8342-6a1255fa8cb4.reward_high_readability
reward_weights:
- 3.87207489867356
- 0.937376822051259
- 7.290092942424822
- 2.502552982256603
use_vllm: false
trust_remote_code: true
use_ema: true
use_peft: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 331f1bc8-fd1a-45f2-8342-6a1255fa8cb4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 331f1bc8-fd1a-45f2-8342-6a1255fa8cb4
warmup_steps: 864
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 965564f6-7c31-434d-ab93-3155a0cb53be
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped); the training dataset is not specified.
It achieves the following results on the evaluation set:
- Loss: 1.4671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 864
- training_steps: 1202
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.3325 | 0.0025 | 1 | 3.7155 |
| 3.1448 | 0.2522 | 101 | 1.6412 |
| 3.0834 | 0.5044 | 202 | 1.4707 |
| 2.5995 | 0.7566 | 303 | 1.4762 |
| 3.2809 | 1.0087 | 404 | 1.5102 |
| 2.8995 | 1.2609 | 505 | 1.4836 |
| 2.7854 | 1.5131 | 606 | 1.5506 |
| 3.2836 | 1.7653 | 707 | 1.5887 |
| 2.6248 | 2.0175 | 808 | 1.4920 |
| 2.8987 | 2.2697 | 909 | 1.5846 |
| 2.946 | 2.5218 | 1010 | 1.5297 |
| 2.4191 | 2.7740 | 1111 | 1.4671 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
aldsouza/health-agent-adpater-only
|
aldsouza
| 2025-08-13T01:00:24Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T01:00:10Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755045730
|
Sayemahsjn
| 2025-08-13T01:00:15Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:00:07Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pineappleSoup/DialoGPT-medium-707
|
pineappleSoup
| 2025-08-13T01:00:07Z
| 21
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"dataset:pineappleSoup/707_transcripts",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-17T02:47:25Z
|
---
tags:
- conversational
language:
- en
base_model:
- microsoft/DialoGPT-medium
datasets:
- pineappleSoup/707_transcripts
license: mit
---
# 707 DialoGPT Model
Chatbot for the character 707 from Mystic Messenger.
# Python Script to Set it up Locally + Connect to Discord
https://github.com/ShuangAnatoli/707
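For quick local testing without the Discord script, here is a minimal single-turn sketch using the standard DialoGPT chat pattern (the message text is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pineappleSoup/DialoGPT-medium-707"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT expects each turn to end with the EOS token
input_ids = tokenizer.encode("Hey 707, what are you up to?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the bot's reply)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```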
|
bruhzair/prototype-0.4x315
|
bruhzair
| 2025-08-13T00:58:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:37:44Z
|
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x315
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea) merge method using /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as a base.
### Models Merged
The following models were included in the merge:
* /workspace/prototype-0.4x310
* /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.4a/snapshots/64723af7b548b0f19e8b4b3867117393282c7839
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/prototype-0.4x310
parameters:
weight: [0.55]
- model: /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.4a/snapshots/64723af7b548b0f19e8b4b3867117393282c7839
parameters:
weight: [0.45]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
source: base
chat_template: llama3
parameters:
normalize_weights: false
eps: 1e-8
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755046485
|
IvanJAjebu
| 2025-08-13T00:56:03Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:55:37Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AIDXteam/Qwen3-235B-A22B-Thinking-2507-AWQ
|
AIDXteam
| 2025-08-13T00:55:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"arxiv:2402.17463",
"arxiv:2407.02490",
"arxiv:2501.15383",
"arxiv:2404.06654",
"arxiv:2505.09388",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-13T00:16:51Z
|
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen3-235B-A22B-Instruct-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## AWQ Quantization
This model has been quantized to 4-bit using llm-compressor.
## Highlights
We introduce the updated version of the **Qwen3-235B-A22B non-thinking mode**, named **Qwen3-235B-A22B-Instruct-2507**, featuring the following key enhancements:
- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.

## Model Overview
**Qwen3-235B-A22B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 235B in total and 22B activated
- Number of Parameters (Non-Embedding): 234B
- Number of Layers: 94
- Number of Attention Heads (GQA): 64 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively and extendable up to 1,010,000 tokens**
**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Performance
| | Deepseek-V3-0324 | GPT-4o-0327 | Claude Opus 4 Non-thinking | Kimi K2 | Qwen3-235B-A22B Non-thinking | Qwen3-235B-A22B-Instruct-2507 |
|--- | --- | --- | --- | --- | --- | ---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | **86.6** | 81.1 | 75.2 | 83.0 |
| MMLU-Redux | 90.4 | 91.3 | **94.2** | 92.7 | 89.2 | 93.1 |
| GPQA | 68.4 | 66.9 | 74.9 | 75.1 | 62.9 | **77.5** |
| SuperGPQA | 57.3 | 51.0 | 56.5 | 57.2 | 48.2 | **62.6** |
| SimpleQA | 27.2 | 40.3 | 22.8 | 31.0 | 12.2 | **54.3** |
| CSimpleQA | 71.1 | 60.2 | 68.0 | 74.5 | 60.8 | **84.3** |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | 33.9 | 49.5 | 24.7 | **70.3** |
| HMMT25 | 27.5 | 7.9 | 15.9 | 38.8 | 10.0 | **55.4** |
| ARC-AGI | 9.0 | 8.8 | 30.3 | 13.3 | 4.3 | **41.8** |
| ZebraLogic | 83.4 | 52.6 | - | 89.0 | 37.7 | **95.0** |
| LiveBench 20241125 | 66.9 | 63.7 | 74.6 | **76.4** | 62.5 | 75.4 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 44.6 | 48.9 | 32.9 | **51.8** |
| MultiPL-E | 82.2 | 82.7 | **88.5** | 85.7 | 79.3 | 87.9 |
| Aider-Polyglot | 55.1 | 45.3 | **70.7** | 59.0 | 59.6 | 57.3 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 87.4 | **89.8** | 83.2 | 88.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 51.5 | 66.1 | 52.0 | **79.2** |
| Creative Writing v3 | 81.6 | 84.9 | 83.8 | **88.1** | 80.4 | 87.5 |
| WritingBench | 74.5 | 75.5 | 79.2 | **86.2** | 77.0 | 85.2 |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 60.1 | 65.2 | 68.0 | **70.9** |
| TAU1-Retail | 49.6 | 60.3# | **81.4** | 70.7 | 65.2 | 71.3 |
| TAU1-Airline | 32.0 | 42.8# | **59.6** | 53.5 | 32.0 | 44.0 |
| TAU2-Retail | 71.1 | 66.7# | **75.5** | 70.6 | 64.9 | 74.6 |
| TAU2-Airline | 36.0 | 42.0# | 55.5 | **56.5** | 36.0 | 50.0 |
| TAU2-Telecom | 34.0 | 29.8# | 45.2 | **65.8** | 24.6 | 32.5 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | - | 76.2 | 70.2 | **77.5** |
| MMLU-ProX | 75.8 | 76.2 | - | 74.5 | 73.2 | **79.4** |
| INCLUDE | 80.1 | **82.1** | - | 76.9 | 75.6 | 79.5 |
| PolyMATH | 32.2 | 25.5 | 30.0 | 44.8 | 27.0 | **50.2** |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.
\#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.
## Quickstart
The code for Qwen3-MoE is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-235B-A22B-Instruct-2507"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Instruct-2507 --tp 8 --context-length 262144
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507 --tensor-parallel-size 8 --max-model-len 262144
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-235B-A22B-Instruct-2507',
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Ultra-Long Texts
To support **ultra-long context processing** (up to **1 million tokens**), we integrate two key techniques:
- **[Dual Chunk Attention](https://arxiv.org/abs/2402.17463) (DCA)**: A length extrapolation method that splits long sequences into manageable chunks while preserving global coherence.
- **[MInference](https://arxiv.org/abs/2407.02490)**: A sparse attention mechanism that reduces computational overhead by focusing on critical token interactions.
Together, these innovations significantly improve both **generation quality** and **inference efficiency** for sequences beyond 256K tokens. On sequences approaching 1M tokens, the system achieves up to a **3× speedup** compared to standard attention implementations.
For full technical details, see the [Qwen2.5-1M Technical Report](https://arxiv.org/abs/2501.15383).
### How to Enable 1M Token Context
> [!NOTE]
> To effectively process a 1 million token context, users will require approximately **1000 GB** of total GPU memory. This accounts for model weights, KV-cache storage, and peak activation memory demands.
#### Step 1: Update Configuration File
Download the model and replace the content of your `config.json` with `config_1m.json`, which includes the config for length extrapolation and sparse attention.
```bash
export MODELNAME=Qwen3-235B-A22B-Instruct-2507
huggingface-cli download Qwen/${MODELNAME} --local-dir ${MODELNAME}
mv ${MODELNAME}/config.json ${MODELNAME}/config.json.bak
mv ${MODELNAME}/config_1m.json ${MODELNAME}/config.json
```
#### Step 2: Launch Model Server
After updating the config, proceed with either **vLLM** or **SGLang** for serving the model.
#### Option 1: Using vLLM
To run Qwen with 1M context support:
```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```
Then launch the server with Dual Chunk Flash Attention enabled:
```bash
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN VLLM_USE_V1=0 \
vllm serve ./Qwen3-235B-A22B-Instruct-2507 \
--tensor-parallel-size 8 \
--max-model-len 1010000 \
--enable-chunked-prefill \
--max-num-batched-tokens 131072 \
--enforce-eager \
--max-num-seqs 1 \
--gpu-memory-utilization 0.85
```
##### Key Parameters
| Parameter | Purpose |
|--------|--------|
| `VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN` | Enables the custom attention kernel for long-context efficiency |
| `--max-model-len 1010000` | Sets maximum context length to ~1M tokens |
| `--enable-chunked-prefill` | Allows chunked prefill for very long inputs (avoids OOM) |
| `--max-num-batched-tokens 131072` | Controls batch size during prefill; balances throughput and memory |
| `--enforce-eager` | Disables CUDA graph capture (required for dual chunk attention) |
| `--max-num-seqs 1` | Limits concurrent sequences due to extreme memory usage |
| `--gpu-memory-utilization 0.85` | Set the fraction of GPU memory to be used for the model executor |
#### Option 2: Using SGLang
First, clone and install the specialized branch:
```bash
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install -e "python[all]"
```
Launch the server with DCA support:
```bash
python3 -m sglang.launch_server \
--model-path ./Qwen3-235B-A22B-Instruct-2507 \
--context-length 1010000 \
--mem-frac 0.75 \
--attention-backend dual_chunk_flash_attn \
--tp 8 \
--chunked-prefill-size 131072
```
##### Key Parameters
| Parameter | Purpose |
|---------|--------|
| `--attention-backend dual_chunk_flash_attn` | Activates Dual Chunk Flash Attention |
| `--context-length 1010000` | Defines max input length |
| `--mem-frac 0.75` | The fraction of the memory used for static allocation (model weights and KV cache memory pool). Use a smaller value if you see out-of-memory errors. |
| `--tp 8` | Tensor parallelism size (matches model sharding) |
| `--chunked-prefill-size 131072` | Prefill chunk size for handling long inputs without OOM |
#### Troubleshooting:
1. Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache." or "RuntimeError: Not enough memory. Please try to increase --mem-fraction-static."
The VRAM reserved for the KV cache is insufficient.
- vLLM: Consider reducing the ``max_model_len`` or increasing the ``tensor_parallel_size`` and ``gpu_memory_utilization``. Alternatively, you can reduce ``max_num_batched_tokens``, although this may significantly slow down inference.
- SGLang: Consider reducing the ``context-length`` or increasing the ``tp`` and ``mem-frac``. Alternatively, you can reduce ``chunked-prefill-size``, although this may significantly slow down inference.
2. Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."
The VRAM reserved for activation weights is insufficient. You can try lowering ``gpu_memory_utilization`` or ``mem-frac``, but be aware that this might reduce the VRAM available for the KV cache.
3. Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager." or "The input (xxxxx tokens) is longer than the model's context length (xxxxx tokens)."
The input is too lengthy. Consider using a shorter sequence or increasing the ``max_model_len`` or ``context-length``.
#### Long-Context Performance
We test the model on an 1M version of the [RULER](https://arxiv.org/abs/2404.06654) benchmark.
| Model Name | Acc avg | 4k | 8k | 16k | 32k | 64k | 96k | 128k | 192k | 256k | 384k | 512k | 640k | 768k | 896k | 1000k |
|---------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|-------|
| Qwen3-235B-A22B (Non-Thinking) | 83.9 | 97.7 | 96.1 | 97.5 | 96.1 | 94.2 | 90.3 | 88.5 | 85.0 | 82.1 | 79.2 | 74.4 | 70.0 | 71.0 | 68.5 | 68.0 |
| Qwen3-235B-A22B-Instruct-2507 (Full Attention) | 92.5 | 98.5 | 97.6 | 96.9 | 97.3 | 95.8 | 94.9 | 93.9 | 94.5 | 91.0 | 92.2 | 90.9 | 87.8 | 84.8 | 86.5 | 84.5 |
| Qwen3-235B-A22B-Instruct-2507 (Sparse Attention) | 91.7 | 98.5 | 97.2 | 97.3 | 97.7 | 96.6 | 94.6 | 92.8 | 94.3 | 90.5 | 89.7 | 89.5 | 86.4 | 83.6 | 84.2 | 82.5 |
* All models are evaluated with Dual Chunk Attention enabled.
* Since the evaluation is time-consuming, we use 260 samples for each length (13 sub-tasks, 20 samples for each).
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0` (a client-side sketch with these settings follows this list).
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
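As a concrete illustration of the sampling settings recommended in item 1 above, here is a minimal client-side sketch against an OpenAI-compatible endpoint started with one of the serve commands in the previous sections. The endpoint URL, API key, and served model name are placeholders, and passing `top_k`/`min_p` through `extra_body` assumes the serving framework (e.g. vLLM or SGLang) accepts these extra sampling fields.
```python
from openai import OpenAI

# Placeholder endpoint and key: point these at your running vLLM/SGLang server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen3-235B-A22B-Instruct-2507",  # served model name; adjust to your deployment
    messages=[{"role": "user", "content": "Please reason step by step, and put your final answer within \\boxed{}. What is 17 * 24?"}],
    temperature=0.7,
    top_p=0.8,
    presence_penalty=1.0,   # optional, 0-2, helps reduce endless repetition
    max_tokens=16384,       # adequate output length for most queries
    extra_body={"top_k": 20, "min_p": 0},  # assumption: the server forwards these extra sampling fields
)
print(response.choices[0].message.content)
```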
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
@article{qwen2.5-1m,
title={Qwen2.5-1M Technical Report},
author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
journal={arXiv preprint arXiv:2501.15383},
year={2025}
}
```
|
stanpony/tiny_lm_8M_normal_tokenizer_vanilla_full_20250812_224726
|
stanpony
| 2025-08-13T00:53:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:53:21Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amoogeona/ppo-Huggy
|
amoogeona
| 2025-08-13T00:53:02Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-08-13T00:52:39Z
|
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: amoogeona/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
jimanex/blockassist-bc-rangy_peaceful_stingray_1755046107
|
jimanex
| 2025-08-13T00:51:44Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rangy peaceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:51:39Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rangy peaceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755046214
|
IvanJAjebu
| 2025-08-13T00:51:33Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:51:09Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yashm-cerebras/qwen3-verifier-skywork-v7-2pointwise
|
yashm-cerebras
| 2025-08-13T00:45:45Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:41:42Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755044271
|
calegpedia
| 2025-08-13T00:44:03Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:44:00Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ncgc/truth_statichh-pythia-2.8b-sft-bf16_bottom100_lr0.024
|
ncgc
| 2025-08-13T00:42:46Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:31:52Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755044279
|
indoempatnol
| 2025-08-13T00:42:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:42:16Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755045596
|
IvanJAjebu
| 2025-08-13T00:41:06Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:40:57Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Soughing/gla_zero_init_medium
|
Soughing
| 2025-08-13T00:41:00Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-04T05:41:14Z
|
---
license: apache-2.0
---
|
owalsh/SplitSubdirSmolLM3
|
owalsh
| 2025-08-13T00:40:17Z
| 0
| 0
| null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T00:35:12Z
|
---
license: apache-2.0
---
Original model: https://huggingface.co/ggml-org/SmolLM3-3B-GGUF
Split with `gguf-split --split-max-size 100M` to create a small model for testing split GGUFs; the resulting shards were then moved to a subdirectory.
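As a rough sketch of how the resulting split GGUF can be consumed, the snippet below loads the first shard with `llama-cpp-python`; recent llama.cpp builds auto-detect the remaining shards sitting next to it. The subdirectory and shard filename are hypothetical placeholders.
```python
from llama_cpp import Llama

# Hypothetical path: point at the *first* shard; the remaining shards are expected alongside it.
llm = Llama(model_path="subdir/SmolLM3-3B-Q4_K_M-00001-of-00031.gguf", n_ctx=2048)

out = llm("Hello, my name is", max_tokens=32)
print(out["choices"][0]["text"])
```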
|
bruhzair/prototype-0.4x314
|
bruhzair
| 2025-08-13T00:37:05Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:14:08Z
|
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x314
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--BruhzWater--Liliths-Whisper-L3.3-70b-0.2a/snapshots/825104bfaa9044ed70d94bdbd72d979de132c743
* /workspace/prototype-0.4x312
* /workspace/prototype-0.4x310
* /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.4a/snapshots/64723af7b548b0f19e8b4b3867117393282c7839
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.4a/snapshots/64723af7b548b0f19e8b4b3867117393282c7839
parameters:
weight: 0.25
density: 0.7
epsilon: 0.25
- model: /workspace/prototype-0.4x310
parameters:
weight: 0.25
density: 0.75
epsilon: 0.2
- model: /workspace/prototype-0.4x312
parameters:
weight: 0.25
density: 0.65
epsilon: 0.15
- model: /workspace/cache/models--BruhzWater--Liliths-Whisper-L3.3-70b-0.2a/snapshots/825104bfaa9044ed70d94bdbd72d979de132c743
parameters:
weight: 0.25
density: 0.7
epsilon: 0.25
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: della
parameters:
normalize: false
lambda: 1.05
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
tokenizer:
source: base
dtype: float32
out_dtype: bfloat16
```
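For reproducibility, a minimal sketch of driving this merge from Python is shown below. It assumes the YAML above is saved locally as `config.yaml`, that the referenced model snapshot paths exist, and that `MergeConfiguration`/`run_merge` are available as in recent mergekit releases; treat it as an outline rather than the exact command used to build this model.
```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (assumed to be saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge; the output directory and options here are illustrative.
run_merge(
    merge_config,
    "./prototype-0.4x314",
    options=MergeOptions(cuda=True, copy_tokenizer=True, lazy_unpickle=False),
)
```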
|
Setsuna007/ft-tech-v14-bge-reranker-v2-m3
|
Setsuna007
| 2025-08-13T00:34:26Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:99438",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"arxiv:1908.10084",
"base_model:BAAI/bge-reranker-v2-m3",
"base_model:finetune:BAAI/bge-reranker-v2-m3",
"model-index",
"region:us"
] |
text-ranking
| 2025-08-13T00:33:26Z
|
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:99438
- loss:BinaryCrossEntropyLoss
base_model: BAAI/bge-reranker-v2-m3
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on BAAI/bge-reranker-v2-m3
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: train eval
type: train-eval
metrics:
- type: map
value: 0.8378990101212324
name: Map
- type: mrr@10
value: 0.8378990101212324
name: Mrr@10
- type: ndcg@10
value: 0.8792290106002533
name: Ndcg@10
---
# CrossEncoder based on BAAI/bge-reranker-v2-m3
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) <!-- at revision 953dc6f6f85a1b2dbfca4c34a2796e7dde08d41e -->
- **Maximum Sequence Length:** 1280 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("Setsuna007/ft-tech-v14-bge-reranker-v2-m3")
# Get scores for pairs of texts
pairs = [
['微信支付商户RESTAPI中,Authorization值的认证类型应该是什么?', '/商户平台(新版V3)/通用规则/开发须知/常见问题/报错:Http头Authorization值格式错误,请参考《微信支付商户REST API签名规则》 # 常见问题-报错:Http头Authorization值格式错误,请参考《微信支付商户REST API签名规则》\n(1)请确认Authorization值是在一行,红框的位置是一个空格而非换行符\n\n\n\n(2)请确认认证类型为WECHATPAY2-SHA256-RSA2048\n\n(3)确认包含了mchid(商户号)、nonce\\_str(随机支付串)、serial\\_no(商户API证书序列号)、timestamp(时间戳)、signature(签名值)参数,且这些参数和计算签名的参数值保持一致\n\n(4)Authorization里的参数是双引号,不是单引号。 nonce\\_str="593BEC0C930BF1AFEB40B4A08C8FB242"而不是nonce\\_str=\'593BEC0C930BF1AFEB40B4A08C8FB242\'\n\n(5)Http头Authorization值都是英文下的符号,而不能是中文符号,参数之间也不要有空格'],
['在小程序合单支付中,申请资金账单的接口请求方式是什么?', '/商户平台(新版V3)/产品文档/支付产品/小程序支付/API列表/申请资金账单 # 小程序支付-申请资金账单\n## 下载接口说明\n微信支付按天提供商户各账户的资金流水账单文件,商户可以通过该接口获取账单文件的下载地址。账单文件详细记录了账户资金操作的相关信息,包括业务单号、收支金额及记账时间等,以便商户进行核对与确认。详细介绍参考:下载账单-产品介绍。\n\n注意:\n\n- 资金账单中的数据反映的是商户微信账户资金变动情况;\n\n- 当日账单将在次日上午9点开始生成,建议商户在次日上午10点以后获取;\n\n- 资金账单中所有涉及金额的字段均以“元”为单位。\n\n\n## 文件格式说明\n\n账单文件主要由明细数据和汇总数据两大部分构成,每部分均包含一行表头以及多行详细数据。\n\n明细数据的每一行都代表一笔具体的资金操作。为防止数据在Excel中被自动转换为科学计数法,每项数据前均添加了字符 `` ` ``。若需汇总计算金额等数据,可以批量移除该字符。\n\n## 接口说明\n\n支持商户:【普通商户】\n\n请求方式:【GET】 `/v3/bill/fundflowbill`\n\n请求域名:【主域名】https://api.mch.weixin.qq.com 使用该域名将访问就近的接入点\n\n\u3000\u3000\u3000\u3000\u3000【备域名】https://api2.mch.weixin.qq.com 使用该域名将访问异地的接入点,指引点击查看\n\n## 请求参数\n\n### Header 参数\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| Authorization | string | true | 请参考 签名认证 生成认证信息 |\n| Accept | string | true | 请设置为 `application/json` |'],
['合单H5支付请求头中需要设置哪些参数?', '/合作伙伴平台(新版V3)/产品文档/支付产品/合单支付/H5合单支付/API列表/查询合单订单 # H5合单支付-查询合单订单 | 合作伙伴\n合单支付下单成功后,服务商(合单发起方)可调用该接口查询合单订单的交易状态。\n\n注意:\n\n请勿使用非合单支付的查单接口查询合单订单。\n\n## 接口说明\n支持商户:【普通服务商】\n\n请求方式:【GET】 /v3/combine-transactions/out-trade-no/{combine_out_trade_no}\n\n请求域名:【主域名】https://api.mch.weixin.qq.com 使用该域名将访问就近的接入点\n\n【备域名】https://api2.mch.weixin.qq.com 使用该域名将访问异地的接入点,指引点击查看\n\n## 请求参数\n### Header 参数\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| Authorization | string | true | 请参考 签名认证 生成认证信息 |\n| Accept | string | true | 请设置为 application/json |\n\n### Path 参数\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| combine_out_trade_no | string(32) | true | 【合单商户订单号】下单时传入的合单商户订单号。 |\n\n请求示例\n\nGET\n\n```\ncurl -X GET \\\n https://api.mch.weixin.qq.com/v3/combine-transactions/out-trade-no/P20150806125346 \\\n -H "Authorization: WECHATPAY2-SHA256-RSA2048 mchid=\\"1900000001\\",..." \\\n -H "Accept: application/json"\n\n```\n## 应答参数\n200 OK'],
['小微商户的资金能力包括哪些?与普通商户相比有什么不同?', '/合作伙伴平台(新版V3)/通用规则/名词表/接入模式 # 名词表-接入模式 | 合作伙伴\n## 普通商户模式\n普通商户模式是指商户自行开发系统来对接微信支付进行交易,微信支付将资金直接结算到商户的结算账户,商户给用户提供支付服务。该模式要求商户具备系统开发能力,商户可自行前往 商户平台 完成入驻。\n\n该模式下数据流与资金流如图所示:\n\n\n\n数据流\n\n\n\n资金流\n\n该模式下常用参数说明如下:\n\n| 参数名称 | 参数说明 |\n| :-- | :-- |\n| AppID | 商户应用载体的AppID,可以是公众号,小程序或App |\n| mchid | 商户在微信侧申请入驻的收款账号 |\n| API v3密钥 | 商户在商户平台设置的API v3密钥,主要用于对敏感字段信息的加密或解密,具体设置流程请参考各产品接入前准备说明 |\n| 商户API证书 | 商户在商户平台下载的证书,主要用于API请求的签名生成及验证,具体下载操作说明请参考各产品接入前准备说明 |\n| OpenID | 用户在直连商户应用下的用户标示<br>OpenID获取详见参数说明 4\\. OpenID |\n\n## 服务商模式\n\n服务商模式是指针对市面上一些中小型且没有开发能力的商户,由已在微信支付官方注册入驻的系统开发商或解决方案提供商协助这些商户完成入驻,开发及日常运营工作的模式。服务商可前往 服务商平台 完成注册入驻。\n\n该模式下数据流与资金流如图所示:'],
['在合单支付的JSAPI调起支付中,nonceStr参数有什么要求?', '/商户平台(新版V3)/产品文档/支付产品/JSAPI支付/API列表/JSAPI调起支付 # JSAPI支付-JSAPI调起支付\n商户通过JSAPI/小程序下单接口获取到发起支付的必要参数prepay\\_id后,再通过微信浏览器内置对象方法(WeixinJSBridge)调起微信支付收银台。\n\n## 接口说明\n\n支持商户:【普通商户】\n\n## 字段说明\n\n### 请求参数\n\n此API签名无后台接口交互,需要将列表中的数据签名\n\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| appId | string(32) | true | 填写下单时传入的appid,且必需与当前实际调起支付的公众号appid一致,否则无法调起支付。 |\n| timeStamp | string(32) | true | Unix 时间戳,是从1970年1月1日(UTC/GMT的午夜)开始所经过的秒数。<br>注意:常见时间戳为秒级或毫秒级,该处必需传秒级时间戳。 |\n| nonceStr | string(32) | true | 随机字符串,不长于32位。该值建议使用随机数算法生成。 |\n| package | string(128) | true | 订单详情扩展字符串,JSAPI下单接口返回的prepay\\_id参数值,提交格式如:prepay\\_id=\\*\\*\\*。 |\n| signType | string(32) | true | 签名类型,固定填RSA。 |\n\n请求示例\n\n示例代码:\n\n```\nfunction onBridgeReady() {\n WeixinJSBridge.invoke(\'getBrandWCPayRequest\', {\n "appId": "wx2421b1c4370ec43b", //公众号ID,由商户传入\n "timeStamp": "1395712654", //时间戳,自1970年以来的秒数\n "nonceStr": "e61463f8efa94090b1f366cccfbbb444", //随机串\n "package": "prepay_id=wx21201855730335ac86f8c43d1889123400",\n "signType": "RSA", //微信签名方式:\n```'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'微信支付商户RESTAPI中,Authorization值的认证类型应该是什么?',
[
'/商户平台(新版V3)/通用规则/开发须知/常见问题/报错:Http头Authorization值格式错误,请参考《微信支付商户REST API签名规则》 # 常见问题-报错:Http头Authorization值格式错误,请参考《微信支付商户REST API签名规则》\n(1)请确认Authorization值是在一行,红框的位置是一个空格而非换行符\n\n\n\n(2)请确认认证类型为WECHATPAY2-SHA256-RSA2048\n\n(3)确认包含了mchid(商户号)、nonce\\_str(随机支付串)、serial\\_no(商户API证书序列号)、timestamp(时间戳)、signature(签名值)参数,且这些参数和计算签名的参数值保持一致\n\n(4)Authorization里的参数是双引号,不是单引号。 nonce\\_str="593BEC0C930BF1AFEB40B4A08C8FB242"而不是nonce\\_str=\'593BEC0C930BF1AFEB40B4A08C8FB242\'\n\n(5)Http头Authorization值都是英文下的符号,而不能是中文符号,参数之间也不要有空格',
'/商户平台(新版V3)/产品文档/支付产品/小程序支付/API列表/申请资金账单 # 小程序支付-申请资金账单\n## 下载接口说明\n微信支付按天提供商户各账户的资金流水账单文件,商户可以通过该接口获取账单文件的下载地址。账单文件详细记录了账户资金操作的相关信息,包括业务单号、收支金额及记账时间等,以便商户进行核对与确认。详细介绍参考:下载账单-产品介绍。\n\n注意:\n\n- 资金账单中的数据反映的是商户微信账户资金变动情况;\n\n- 当日账单将在次日上午9点开始生成,建议商户在次日上午10点以后获取;\n\n- 资金账单中所有涉及金额的字段均以“元”为单位。\n\n\n## 文件格式说明\n\n账单文件主要由明细数据和汇总数据两大部分构成,每部分均包含一行表头以及多行详细数据。\n\n明细数据的每一行都代表一笔具体的资金操作。为防止数据在Excel中被自动转换为科学计数法,每项数据前均添加了字符 `` ` ``。若需汇总计算金额等数据,可以批量移除该字符。\n\n## 接口说明\n\n支持商户:【普通商户】\n\n请求方式:【GET】 `/v3/bill/fundflowbill`\n\n请求域名:【主域名】https://api.mch.weixin.qq.com 使用该域名将访问就近的接入点\n\n\u3000\u3000\u3000\u3000\u3000【备域名】https://api2.mch.weixin.qq.com 使用该域名将访问异地的接入点,指引点击查看\n\n## 请求参数\n\n### Header 参数\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| Authorization | string | true | 请参考 签名认证 生成认证信息 |\n| Accept | string | true | 请设置为 `application/json` |',
'/合作伙伴平台(新版V3)/产品文档/支付产品/合单支付/H5合单支付/API列表/查询合单订单 # H5合单支付-查询合单订单 | 合作伙伴\n合单支付下单成功后,服务商(合单发起方)可调用该接口查询合单订单的交易状态。\n\n注意:\n\n请勿使用非合单支付的查单接口查询合单订单。\n\n## 接口说明\n支持商户:【普通服务商】\n\n请求方式:【GET】 /v3/combine-transactions/out-trade-no/{combine_out_trade_no}\n\n请求域名:【主域名】https://api.mch.weixin.qq.com 使用该域名将访问就近的接入点\n\n【备域名】https://api2.mch.weixin.qq.com 使用该域名将访问异地的接入点,指引点击查看\n\n## 请求参数\n### Header 参数\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| Authorization | string | true | 请参考 签名认证 生成认证信息 |\n| Accept | string | true | 请设置为 application/json |\n\n### Path 参数\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| combine_out_trade_no | string(32) | true | 【合单商户订单号】下单时传入的合单商户订单号。 |\n\n请求示例\n\nGET\n\n```\ncurl -X GET \\\n https://api.mch.weixin.qq.com/v3/combine-transactions/out-trade-no/P20150806125346 \\\n -H "Authorization: WECHATPAY2-SHA256-RSA2048 mchid=\\"1900000001\\",..." \\\n -H "Accept: application/json"\n\n```\n## 应答参数\n200 OK',
'/合作伙伴平台(新版V3)/通用规则/名词表/接入模式 # 名词表-接入模式 | 合作伙伴\n## 普通商户模式\n普通商户模式是指商户自行开发系统来对接微信支付进行交易,微信支付将资金直接结算到商户的结算账户,商户给用户提供支付服务。该模式要求商户具备系统开发能力,商户可自行前往 商户平台 完成入驻。\n\n该模式下数据流与资金流如图所示:\n\n\n\n数据流\n\n\n\n资金流\n\n该模式下常用参数说明如下:\n\n| 参数名称 | 参数说明 |\n| :-- | :-- |\n| AppID | 商户应用载体的AppID,可以是公众号,小程序或App |\n| mchid | 商户在微信侧申请入驻的收款账号 |\n| API v3密钥 | 商户在商户平台设置的API v3密钥,主要用于对敏感字段信息的加密或解密,具体设置流程请参考各产品接入前准备说明 |\n| 商户API证书 | 商户在商户平台下载的证书,主要用于API请求的签名生成及验证,具体下载操作说明请参考各产品接入前准备说明 |\n| OpenID | 用户在直连商户应用下的用户标示<br>OpenID获取详见参数说明 4\\. OpenID |\n\n## 服务商模式\n\n服务商模式是指针对市面上一些中小型且没有开发能力的商户,由已在微信支付官方注册入驻的系统开发商或解决方案提供商协助这些商户完成入驻,开发及日常运营工作的模式。服务商可前往 服务商平台 完成注册入驻。\n\n该模式下数据流与资金流如图所示:',
'/商户平台(新版V3)/产品文档/支付产品/JSAPI支付/API列表/JSAPI调起支付 # JSAPI支付-JSAPI调起支付\n商户通过JSAPI/小程序下单接口获取到发起支付的必要参数prepay\\_id后,再通过微信浏览器内置对象方法(WeixinJSBridge)调起微信支付收银台。\n\n## 接口说明\n\n支持商户:【普通商户】\n\n## 字段说明\n\n### 请求参数\n\n此API签名无后台接口交互,需要将列表中的数据签名\n\n| 参数名 | 类型 | 必填 | 描述 |\n| --- | --- | --- | --- |\n| appId | string(32) | true | 填写下单时传入的appid,且必需与当前实际调起支付的公众号appid一致,否则无法调起支付。 |\n| timeStamp | string(32) | true | Unix 时间戳,是从1970年1月1日(UTC/GMT的午夜)开始所经过的秒数。<br>注意:常见时间戳为秒级或毫秒级,该处必需传秒级时间戳。 |\n| nonceStr | string(32) | true | 随机字符串,不长于32位。该值建议使用随机数算法生成。 |\n| package | string(128) | true | 订单详情扩展字符串,JSAPI下单接口返回的prepay\\_id参数值,提交格式如:prepay\\_id=\\*\\*\\*。 |\n| signType | string(32) | true | 签名类型,固定填RSA。 |\n\n请求示例\n\n示例代码:\n\n```\nfunction onBridgeReady() {\n WeixinJSBridge.invoke(\'getBrandWCPayRequest\', {\n "appId": "wx2421b1c4370ec43b", //公众号ID,由商户传入\n "timeStamp": "1395712654", //时间戳,自1970年以来的秒数\n "nonceStr": "e61463f8efa94090b1f366cccfbbb444", //随机串\n "package": "prepay_id=wx21201855730335ac86f8c43d1889123400",\n "signType": "RSA", //微信签名方式:\n```',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Dataset: `train-eval`
* Evaluated with [<code>CERerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CERerankingEvaluator) with these parameters:
```json
{
"at_k": 10
}
```
| Metric | Value |
|:------------|:-----------|
| map | 0.8379 |
| mrr@10 | 0.8379 |
| **ndcg@10** | **0.8792** |
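For reference, here is a minimal sketch of running this evaluator yourself. The query and passage strings are placeholders, and the exact return value (a single MRR score versus a dict of metrics) can vary between sentence-transformers versions.
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CERerankingEvaluator

model = CrossEncoder("Setsuna007/ft-tech-v14-bge-reranker-v2-m3")

# Each sample pairs a query with known-relevant ("positive") and irrelevant ("negative") passages.
samples = [
    {
        "query": "微信支付商户RESTAPI中,Authorization值的认证类型应该是什么?",
        "positive": ["...relevant documentation passage..."],
        "negative": ["...unrelated documentation passage..."],
    },
]

evaluator = CERerankingEvaluator(samples, at_k=10, name="train-eval")
print(evaluator(model))  # ranking metrics such as mrr@10 / ndcg@10
```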
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 99,438 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 12 characters</li><li>mean: 30.73 characters</li><li>max: 70 characters</li></ul> | <ul><li>min: 85 characters</li><li>mean: 841.76 characters</li><li>max: 2321 characters</li></ul> | <ul><li>min: 0.2</li><li>mean: 0.7</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>微信支付商户RESTAPI中,Authorization值的认证类型应该是什么?</code> | <code>/商户平台(新版V3)/通用规则/开发须知/常见问题/报错:Http头Authorization值格式错误,请参考《微信支付商户REST API签名规则》 # 常见问题-报错:Http头Authorization值格式错误,请参考《微信支付商户REST API签名规则》<br>(1)请确认Authorization值是在一行,红框的位置是一个空格而非换行符<br><br><br><br>(2)请确认认证类型为WECHATPAY2-SHA256-RSA2048<br><br>(3)确认包含了mchid(商户号)、nonce\_str(随机支付串)、serial\_no(商户API证书序列号)、timestamp(时间戳)、signature(签名值)参数,且这些参数和计算签名的参数值保持一致<br><br>(4)Authorization里的参数是双引号,不是单引号。 nonce\_str="593BEC0C930BF1AFEB40B4A08C8FB242"而不是nonce\_str='593BEC0C930BF1AFEB40B4A08C8FB242'<br><br>(5)Http头Authorization值都是英文下的符号,而不能是中文符号,参数之间也不要有空格</code> | <code>0.7</code> |
| <code>在小程序合单支付中,申请资金账单的接口请求方式是什么?</code> | <code>/商户平台(新版V3)/产品文档/支付产品/小程序支付/API列表/申请资金账单 # 小程序支付-申请资金账单<br>## 下载接口说明<br>微信支付按天提供商户各账户的资金流水账单文件,商户可以通过该接口获取账单文件的下载地址。账单文件详细记录了账户资金操作的相关信息,包括业务单号、收支金额及记账时间等,以便商户进行核对与确认。详细介绍参考:下载账单-产品介绍。<br><br>注意:<br><br>- 资金账单中的数据反映的是商户微信账户资金变动情况;<br><br>- 当日账单将在次日上午9点开始生成,建议商户在次日上午10点以后获取;<br><br>- 资金账单中所有涉及金额的字段均以“元”为单位。<br><br><br>## 文件格式说明<br><br>账单文件主要由明细数据和汇总数据两大部分构成,每部分均包含一行表头以及多行详细数据。<br><br>明细数据的每一行都代表一笔具体的资金操作。为防止数据在Excel中被自动转换为科学计数法,每项数据前均添加了字符 `` ` ``。若需汇总计算金额等数据,可以批量移除该字符。<br><br>## 接口说明<br><br>支持商户:【普通商户】<br><br>请求方式:【GET】 `/v3/bill/fundflowbill`<br><br>请求域名:【主域名】https://api.mch.weixin.qq.com 使用该域名将访问就近的接入点<br><br> 【备域名】https://api2.mch.weixin.qq.com 使用该域名将访问异地的接入点,指引点击查看<br><br>## 请求参数<br><br>### Header 参数<br>| 参数名 | 类型 | 必填 | 描述 |<br>| --- | --- | --- | --- |<br>| Authorization | string | true | 请参考 签名认证 生成认证信息 |<br>| Accept | string | true | 请设置为 `application/json` |</code> | <code>0.5</code> |
| <code>合单H5支付请求头中需要设置哪些参数?</code> | <code>/合作伙伴平台(新版V3)/产品文档/支付产品/合单支付/H5合单支付/API列表/查询合单订单 # H5合单支付-查询合单订单 | 合作伙伴<br>合单支付下单成功后,服务商(合单发起方)可调用该接口查询合单订单的交易状态。<br><br>注意:<br><br>请勿使用非合单支付的查单接口查询合单订单。<br><br>## 接口说明<br>支持商户:【普通服务商】<br><br>请求方式:【GET】 /v3/combine-transactions/out-trade-no/{combine_out_trade_no}<br><br>请求域名:【主域名】https://api.mch.weixin.qq.com 使用该域名将访问就近的接入点<br><br>【备域名】https://api2.mch.weixin.qq.com 使用该域名将访问异地的接入点,指引点击查看<br><br>## 请求参数<br>### Header 参数<br>| 参数名 | 类型 | 必填 | 描述 |<br>| --- | --- | --- | --- |<br>| Authorization | string | true | 请参考 签名认证 生成认证信息 |<br>| Accept | string | true | 请设置为 application/json |<br><br>### Path 参数<br>| 参数名 | 类型 | 必填 | 描述 |<br>| --- | --- | --- | --- |<br>| combine_out_trade_no | string(32) | true | 【合单商户订单号】下单时传入的合单商户订单号。 |<br><br>请求示例<br><br>GET<br><br>```<br>curl -X GET \<br> https://api.mch.weixin.qq.com/v3/combine-transactions/out-trade-no/P20150806125346 \<br> -H "Authorization: WECHATPAY2-SHA256-RSA2048 mchid=\"1900000001\",..." \<br> -H "Accept: application/json"<br><br>```<br>## 应答参数<br>200 OK</code> | <code>0.5</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
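A minimal sketch of constructing this loss for further fine-tuning, assuming the `BinaryCrossEntropyLoss` import path from the documentation linked above:
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Start from the base reranker (or from this fine-tuned checkpoint) with a single output label.
model = CrossEncoder("BAAI/bge-reranker-v2-m3", num_labels=1)

# Scores (sentence_0, sentence_1) pairs against float labels, as in the dataset described above.
loss = BinaryCrossEntropyLoss(model)
```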
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `num_train_epochs`: 1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | train-eval_ndcg@10 |
|:------:|:-----:|:-------------:|:------------------:|
| 0.0201 | 500 | 0.9121 | - |
| 0.0402 | 1000 | 0.7186 | - |
| 0.0603 | 1500 | 0.6168 | - |
| 0.0805 | 2000 | 0.6024 | - |
| 0.1006 | 2500 | 0.5882 | - |
| 0.1207 | 3000 | 0.5854 | - |
| 0.1408 | 3500 | 0.5601 | - |
| 0.1609 | 4000 | 0.5661 | - |
| 0.1810 | 4500 | 0.5655 | - |
| 0.2011 | 5000 | 0.5617 | 0.8584 |
| 0.2212 | 5500 | 0.5632 | - |
| 0.2414 | 6000 | 0.5679 | - |
| 0.2615 | 6500 | 0.5591 | - |
| 0.2816 | 7000 | 0.5533 | - |
| 0.3017 | 7500 | 0.5556 | - |
| 0.3218 | 8000 | 0.5518 | - |
| 0.3419 | 8500 | 0.5549 | - |
| 0.3620 | 9000 | 0.5598 | - |
| 0.3821 | 9500 | 0.555 | - |
| 0.4023 | 10000 | 0.5577 | 0.8715 |
| 0.4224 | 10500 | 0.561 | - |
| 0.4425 | 11000 | 0.5626 | - |
| 0.4626 | 11500 | 0.5517 | - |
| 0.4827 | 12000 | 0.5454 | - |
| 0.5028 | 12500 | 0.5551 | - |
| 0.5229 | 13000 | 0.5547 | - |
| 0.5430 | 13500 | 0.5494 | - |
| 0.5632 | 14000 | 0.5542 | - |
| 0.5833 | 14500 | 0.5533 | - |
| 0.6034 | 15000 | 0.5533 | 0.8769 |
| 0.6235 | 15500 | 0.552 | - |
| 0.6436 | 16000 | 0.5553 | - |
| 0.6637 | 16500 | 0.5518 | - |
| 0.6838 | 17000 | 0.5458 | - |
| 0.7039 | 17500 | 0.5433 | - |
| 0.7241 | 18000 | 0.5433 | - |
| 0.7442 | 18500 | 0.558 | - |
| 0.7643 | 19000 | 0.5494 | - |
| 0.7844 | 19500 | 0.5444 | - |
| 0.8045 | 20000 | 0.5482 | 0.8792 |
### Framework Versions
- Python: 3.9.20
- Sentence Transformers: 5.0.0
- Transformers: 4.54.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RE-N-Y/t5gemma-2b-2b-prefixlm-it
|
RE-N-Y
| 2025-08-13T00:34:06Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"t5_gemma_module",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T00:31:56Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hayashizawa/blockassist-bc-whiskered_poisonous_capybara_1755045036
|
hayashizawa
| 2025-08-13T00:32:06Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whiskered poisonous capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:31:43Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whiskered poisonous capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755044008
|
Sayemahsjn
| 2025-08-13T00:31:09Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:31:04Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mpopescu99/unsloth-skincarebot
|
mpopescu99
| 2025-08-13T00:30:27Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T22:30:59Z
|
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
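Since the card leaves this section as a placeholder, here is a heavily hedged, generic sketch; the repository contents (full checkpoint vs. LoRA adapter), intended task, and prompt format are all assumptions rather than facts from this card:
```python
# Generic sketch only; assumes the repo holds a full causal-LM checkpoint rather than just an adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpopescu99/unsloth-skincarebot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What ingredients help with dry skin?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```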
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hayashizawa/blockassist-bc-whiskered_poisonous_capybara_1755044788
|
hayashizawa
| 2025-08-13T00:27:51Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whiskered poisonous capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:27:28Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whiskered poisonous capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Devique/Calmiq-lora4
|
Devique
| 2025-08-13T00:27:43Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-11T12:25:18Z
|
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Devique
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/AMD-Llama-350M-Upgraded-GGUF
|
mradermacher
| 2025-08-13T00:27:24Z
| 69
| 0
|
transformers
|
[
"transformers",
"gguf",
"causal-lm",
"llama",
"reflex-ai",
"en",
"base_model:motionlabs/AMD-Llama-350M-Upgraded",
"base_model:quantized:motionlabs/AMD-Llama-350M-Upgraded",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-19T12:38:41Z
|
---
base_model: motionlabs/AMD-Llama-350M-Upgraded
language: en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- causal-lm
- transformers
- llama
- reflex-ai
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/motionlabs/AMD-Llama-350M-Upgraded
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#AMD-Llama-350M-Upgraded-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
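As a minimal sketch (not part of the upstream card), these quants can be loaded with llama-cpp-python's `Llama.from_pretrained`; the quant file name comes from the table below, while the context size and prompt are arbitrary choices:
```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/AMD-Llama-350M-Upgraded-GGUF",
    filename="AMD-Llama-350M-Upgraded.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=2048,  # assumed context window; adjust as needed
)

print(llm("The GGUF file format is", max_tokens=32)["choices"][0]["text"])
```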
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-350M-Upgraded-GGUF/resolve/main/AMD-Llama-350M-Upgraded.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KoichiYasuoka/roberta-large-wechsel-ukrainian-ud-goeswith
|
KoichiYasuoka
| 2025-08-13T00:27:24Z
| 0
| 0
| null |
[
"pytorch",
"roberta",
"ukrainian",
"token-classification",
"pos",
"dependency-parsing",
"uk",
"dataset:universal_dependencies",
"base_model:benjamin/roberta-large-wechsel-ukrainian",
"base_model:finetune:benjamin/roberta-large-wechsel-ukrainian",
"license:mit",
"region:us"
] |
token-classification
| 2025-08-13T00:24:14Z
|
---
language:
- "uk"
tags:
- "ukrainian"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: benjamin/roberta-large-wechsel-ukrainian
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
---
# roberta-large-wechsel-ukrainian-ud-goeswith
## Model Description
This is a RoBERTa model for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-large-wechsel-ukrainian](https://huggingface.co/benjamin/roberta-large-wechsel-ukrainian).
## How to Use
```py
from transformers import pipeline

# Custom "universal-dependencies" pipeline shipped with the repo; requires trust_remote_code=True
nlp = pipeline("universal-dependencies", "KoichiYasuoka/roberta-large-wechsel-ukrainian-ud-goeswith",
               trust_remote_code=True, aggregation_strategy="simple")
print(nlp("Біжать алеї звуків, саджених у гами."))
```
|
hardlyworking/MS32-3_Merged
|
hardlyworking
| 2025-08-13T00:25:58Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only",
"base_model:merge:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only",
"base_model:hardlyworking/MS32-3",
"base_model:merge:hardlyworking/MS32-3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:20:56Z
|
---
base_model:
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- hardlyworking/MS32-3
library_name: transformers
tags:
- mergekit
- merge
---
# MS32-3_Merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only](https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only) + [hardlyworking/MS32-3](https://huggingface.co/hardlyworking/MS32-3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
models:
- model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only+hardlyworking/MS32-3
```
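For reference (an assumption, not stated in the original card): a configuration like this is normally applied with mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./merged-output`, where `config.yaml` holds the YAML above and `./merged-output` is a placeholder output directory.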
|