| Column | Type |
|---|---|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | list |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
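These columns mirror the model metadata exposed by the Hugging Face Hub. As a hedged sketch (attribute and parameter names can vary slightly across `huggingface_hub` versions), rows like the ones below could be reproduced with:

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()

# Most recently modified models; full=True requests extended metadata.
for info in api.list_models(sort="last_modified", direction=-1, limit=5, full=True):
    print(info.id, info.author, info.downloads, info.likes,
          info.library_name, info.pipeline_tag, info.tags)
    # The `card` column corresponds to the repository README:
    try:
        card = ModelCard.load(info.id)
        print(card.content[:200])
    except Exception:
        pass  # some repositories have no model card
```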
jusstanother/blockassist-bc-ferocious_reclusive_emu_1755069057
jusstanother
2025-08-13T07:11:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "ferocious reclusive emu", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:11:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - ferocious reclusive emu --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lemms/openllm-small-extended-7k
lemms
2025-08-13T07:11:00Z
0
0
transformers
[ "transformers", "pytorch", "gpt", "text-generation", "language-model", "open-source", "transformer", "causal-lm", "en", "dataset:squad", "license:gpl-3.0", "license:other", "model-index", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T07:00:24Z
--- language: - en license: - gpl-3.0 - other tags: - text-generation - language-model - open-source - gpt - transformer - causal-lm datasets: - squad metrics: - perplexity - loss library_name: transformers pipeline_tag: text-generation model-index: - name: OpenLLM Small Extended 7K results: - task: type: text-generation dataset: type: squad name: Wikipedia passages from SQuAD metrics: - type: loss value: 2.1 - type: perplexity value: 8.2 --- # OpenLLM Small Extended 7K Model <!-- Copyright (C) 2024 Louis Chua Bean Chong --> <!-- This file is part of OpenLLM - dual-licensed under GPLv3 and Commercial License --> ## 🌟 Model Overview This is the **OpenLLM Small Extended 7K** model, a 35.8M parameter GPT-style language model trained for 7,000 steps on Wikipedia passages from the SQuAD dataset. This model represents the latest iteration of our small model architecture with extended training. ### **📊 Model Specifications** - **Architecture**: GPT-style Transformer - **Parameters**: 35,823,616 (35.8M) - **Layers**: 6 transformer layers - **Heads**: 8 attention heads - **Embedding Dimension**: 512 - **Vocabulary Size**: 32,000 tokens - **Context Length**: 1,024 tokens - **Training Steps**: 7,000 - **Model Size**: Small ### **🎯 Training Details** - **Dataset**: Wikipedia passages from SQuAD dataset (~41k passages) - **Tokenization**: SentencePiece with 32k vocabulary - **Training Objective**: Next token prediction (causal language modeling) - **Optimizer**: AdamW with learning rate scheduling - **Hardware**: Trained on consumer GPU with gradient accumulation ### **📁 Model Files** ``` huggingface/ ├── config.json # Model configuration ├── generation_config.json # Generation parameters ├── pytorch_model.bin # Model weights (161MB) ├── tokenizer_config.json # Tokenizer configuration ├── tokenizer.model # SentencePiece tokenizer └── load_hf_model.py # Loading script ``` ## 🚀 Usage ### **Loading with Hugging Face Transformers** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load model and tokenizer model_name = "path/to/huggingface" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Generate text prompt = "The history of artificial intelligence" inputs = tokenizer(prompt, return_tensors="pt") with torch.no_grad(): outputs = model.generate( inputs.input_ids, max_new_tokens=100, temperature=0.7, do_sample=True, pad_token_id=tokenizer.pad_token_id ) generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(generated_text) ``` ### **Using the Custom Loader** ```python from load_hf_model import load_openllm_model # Load the model using our custom loader model, tokenizer = load_openllm_model("path/to/huggingface") # Generate text prompt = "Explain quantum computing in simple terms" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate( inputs.input_ids, max_new_tokens=150, temperature=0.8, top_p=0.9 ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### **Inference Server** ```bash # Start the FastAPI inference server python core/src/inference_server.py \ --model_path exports/huggingface-7k/huggingface \ --port 8000 # Make API calls curl -X POST "http://localhost:8000/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "The future of renewable energy", "max_tokens": 100, "temperature": 0.7 }' ``` ## 📈 Performance ### **Training Metrics** - **Final Loss**: ~2.1 (cross-entropy) - **Training Time**: ~7 hours on consumer GPU - **Memory Usage**: 
~2GB VRAM during training - **Inference Speed**: ~50 tokens/second on CPU, ~200 tokens/second on GPU ### **Model Capabilities** - **Text Generation**: Coherent paragraph generation - **Question Answering**: Basic factual responses - **Summarization**: Short text summarization - **Language Understanding**: Context-aware responses ## 🔧 Configuration ### **Generation Parameters** ```json { "max_length": 512, "max_new_tokens": 256, "temperature": 0.7, "top_k": 40, "top_p": 0.9, "do_sample": true, "pad_token_id": 0, "eos_token_id": 1, "bos_token_id": 2 } ``` ### **Model Architecture** ```json { "vocab_size": 32000, "n_layer": 6, "n_head": 8, "n_embd": 512, "block_size": 1024, "dropout": 0.1, "bias": true } ``` ## 🧪 Testing ### **Quick Test** ```python # Test the model with a simple prompt test_prompt = "Hello, how are you today?" inputs = tokenizer(test_prompt, return_tensors="pt") with torch.no_grad(): outputs = model.generate( inputs.input_ids, max_new_tokens=20, temperature=0.7 ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(f"Input: {test_prompt}") print(f"Output: {response}") ``` ## 📋 Limitations - **Context Length**: Limited to 1,024 tokens - **Training Data**: Only Wikipedia passages (limited domain) - **Model Size**: Small model with limited reasoning capabilities - **Bias**: May inherit biases from training data - **Factual Accuracy**: Not guaranteed for current events ## 🔄 Model Comparison | Model | Parameters | Training Steps | Context Length | Use Case | |-------|------------|----------------|----------------|----------| | Small 4K | 35.8M | 4,000 | 1,024 | Basic text generation | | Small 6K | 35.8M | 6,000 | 1,024 | Improved coherence | | **Small 7K** | **35.8M** | **7,000** | **1,024** | **Extended training** | ## 📄 License This model is dual-licensed: - **Open Source**: GNU General Public License v3.0 - **Commercial**: Commercial License (contact for details) See `LICENSE` and `docs/LICENSES.md` for full license information. ## 🤝 Contributing We welcome contributions to improve the model! Please see: - `docs/CONTRIBUTING.md` for contribution guidelines - `docs/CODE_OF_CONDUCT.md` for community standards ## 📞 Support For questions, issues, or commercial licensing: - **GitHub Issues**: Report bugs and feature requests - **Documentation**: Check `docs/` directory - **Commercial License**: Contact for enterprise use --- **Author**: Louis Chua Bean Chong **Project**: OpenLLM - Open Source Large Language Model **Version**: 0.1.0 **Last Updated**: 2024
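The model-index above reports both a final loss of 2.1 and a perplexity of 8.2; the two are consistent, since perplexity is the exponentiated per-token cross-entropy. A quick check:

```python
import math

# Perplexity is exp(cross-entropy loss per token).
final_loss = 2.1
print(round(math.exp(final_loss), 1))  # ≈ 8.2, matching the reported perplexity
```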
ngohongthai/quicktest_mathpal
ngohongthai
2025-08-13T07:10:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3n", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:54:01Z
--- base_model: unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3n license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** ngohongthai - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
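The card above does not include a usage snippet. A minimal sketch, assuming the repository holds a checkpoint loadable through Unsloth's `FastLanguageModel` interface (the exact loader class for gemma3n checkpoints may differ; Unsloth also ships `FastModel`):

```python
from unsloth import FastLanguageModel

# Load the finetuned checkpoint in 4-bit; adjust max_seq_length as needed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ngohongthai/quicktest_mathpal",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode

inputs = tokenizer("Solve step by step: 12 * 7 = ?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```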
alexgeezy429/blockassist-bc-scented_coiled_antelope_1755067133
alexgeezy429
2025-08-13T07:10:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scented coiled antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:10:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scented coiled antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ajjyy/Qwen2-0.5B-PPO-gsm8k-attempt4
ajjyy
2025-08-13T07:09:47Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "ppo", "trl", "arxiv:1909.08593", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-13T03:43:33Z
--- base_model: Qwen/Qwen2-0.5B-Instruct library_name: transformers model_name: Qwen2-0.5B-PPO-gsm8k-attempt4 tags: - generated_from_trainer - ppo - trl licence: license --- # Model Card for Qwen2-0.5B-PPO-gsm8k-attempt4 This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ajjyy/Qwen2-0.5B-PPO-gsm8k-attempt4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ajyang-massachusetts-institute-of-technology/gsm8k_ppo_2/runs/5bj9ljux) This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593). ### Framework versions - TRL: 0.20.0.dev0 - Transformers: 4.53.3 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.21.2 ## Citations Cite PPO as: ```bibtex @article{mziegler2019fine-tuning, title = {{Fine-Tuning Language Models from Human Preferences}}, author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving}, year = 2019, eprint = {arXiv:1909.08593} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
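For reference, the PPO method cited in the card above optimizes the clipped surrogate objective (standard notation, not taken from this card):

```latex
L^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[
      \min\!\Big( r_t(\theta)\,\hat{A}_t,\;
                  \operatorname{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t \Big)
    \right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```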
AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts
AmanPriyanshu
2025-08-13T07:09:25Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "law", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T07:08:55Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - law - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Law GPT-OSS Model (13 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for law tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~9.6B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 13 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Law | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from law evaluation tasks - **Systematic Reduction**: Reduced from 32 to 13 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 40.6% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to law tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This legal domain model employs experts that demonstrated expertise during law-related tasks from MMLU legal subjects. These experts excel at legal reasoning, jurisprudence, and understanding of legal frameworks and procedures. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for law tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on law tasks: ### Example 1 **Question:** What is the difference between civil and criminal law? **Response:** ``` <|channel|>analysis<|message|>Need explain.<|end|><|start|>assistant<|channel|>final<|message|>**Civil law** and **criminal law** are two branches of the legal system that deal with very different kinds of disputes and wrongs. 
The main differences lie in the parties involved, the purpose of the law, the burden of proof, the remedies, and the procedural rules. | Feature | Civil Law | Criminal Law | |---|---|---| | **Parties** | Private individuals or entities (e.g., a person suing a neighbor for a property dispute, a company suing another for breach of contract). | The state (or society) versus an individual or entity accused of violating a statute (e.g., theft, assault). | | **Purpose** | To resolve disputes between parties and to compensate the wrong‑party (damers) for loss or injury. | To punish or deter wrongdoing that harms society, to protect public order, and to maintain social safety. | | **Burain of Proof** | “Pre‑beyond a reasonable doubt” is not required; the plaintiff must prove the claim **by a pre‑balance of evidence** (i.e., “more likely than less”). | The prosecution must prove the defendant’s guilt **beyond a reasonable doubt**. | | **Remade** | Typically monetary damages, injunctions, or specific performance (e.g., pay compensation, stop a nuisance). | Penative sanctions: fines, imprisonment, probation, community service, restitution, or other corrective measures. | | **Procedure** | Usually civil courts or tribunals; parties may choose to represent themselves or hire counsel; the process is often less formal and can be faster. | Criminal courts (often with a judge and jury); the defendant has constitutional rights (e‑e. right to counsel, right to a speedy trial, right to a presis). The process is more formal and can be longer. | | **Defender’s Rights** | The defendant (or “respondent”) has a right to a fair hearing, but the state does not have a special “prosecution” role. | The defendant has a right to a presis, to counsel, to cross‑examination, to a jury, and to appeal. | | **Outcome** | The court orders the defendant to pay or to do something; the plaintiff receives compensation. | The court orders the defendant to pay a fine, serve jail time, or perform community service. | | **Examples** | Contract disputes, property disputes, tort claims (neglig ``` --- ### Example 2 **Question:** Explain the concept of due process in legal systems. **Response:** ``` <|channel|>analysis<|message|>Need explain due process.<|end|><|start|>assistant<|channel|>final<|message|>**Due Process (or “Due‑Process”) – a Fundamental Legal Principle** --- ### 1. What it Is * **Due Process** is a constitutional or statutory safeguard that guarantees a fair, impartial, and transparent legal procedure before the state may deprim­ent an individual’s life, liberty, or property. * It is a *procedural* right, not a substantive right. It protects the *process* by which a government acts, not the outcome itself. --- ### 2. Core Elements | Element | Why It Matters | Typical Manifestation | |--------|--------------|------------------------| | **Notice** | The person must be informed of the claim or charge. | Written summons, court docket, public notice. | | **Opportunity to Respond** | The person must be able to present evidence, argue, and cross‑examine. | Oral or written pleadings, deposition, trial. | | **Neutral Decision Maker** | The decision must be made by an impartial judge or jury. | Ad‑un‑parte judge, jury trial. | | **Right to Counsel** | The person may have legal representation. | Public‑law attorney, private counsel. | | **Right to Appeal** | The decision can be reviewed by a higher court. | Appeal, habe‑de‑liba, appellate review. | | **Reasoned Decision** | The outcome must be based on evidence and law. 
| Written opinion, record of findings. | | **Timity** | The process must be timely, preventing indefinite detention or punishment. | Stat‑tim, bail, release. | --- ### 3. Historical Roots | Era | Key Development | |-----|----------------| | **17th–18th Century** | English Common Law – “no law without the consent of the people.” | | **19th Century** | U.S. Constitution (17th Amendment) – “no law shall be passed without the consent of the people.” | | **20th Century** | 4th & 5th Amendments (U.S.) – “No person shall be deprived of life, liberty, or property … without due process.” | | **International** | Universal Declaration of Human Rights (1945) – “Everyone has the right to a fair trial.” | --- ### 4. How It Works in Practice 1. **Arar‑diction** – The state must show a ``` --- ### Example 3 **Question:** What are the basic rights protected by the First Amendment? **Response:** ``` <|channel|>analysis<|message|>Need to answer: rights: freedom of religion, freedom of speech, press, assembly, petition, right to bear arms? Actually 1st Amendment: prohibits laws abrapping religion, speech, press, assembly, petition, and also right to bear arms. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>The First Amendment to the U.S. Constitution protects a handful of fundamental civil liberties that are considered essential to a free, democratic society. In short, it guarantees: | **Right** | **What it protects** | |---|---| | **Freedom of Religion** | The government may not establish a religion or prohibit the practice of any religion. | | **Freedom of Speech** | Individuals may express opinions, ideas, and viewpoints without government censorship or punishment. | | **Freedom of the Press** | The press (newspapers, radio, television, etc.) may publish information and critique the government. | | **Freedom of Assembly** | People may gather peacefully for protest, demonstration, or other purposes. | | **Freedom to Petition** | Citizens may request or demand action from the government. | | **Right to Bear Arms** | (as interpreted by the Supreme Court) Individuals may possess and carry firearms. | These rights are not absolute; the courts have carved out certain limits (e.g., restrictions on hate‑speech, defamation, obscus­ity, or threats of violence). But the First Amendment remains the cornerstone of American free‑speech, religious liberty, and democratic participation.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
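The pruning steps described in this card can be illustrated with a minimal sketch (not the authors' actual script), assuming an MoE layer whose router is an `nn.Linear` producing one logit per expert and whose experts live in an `nn.ModuleList`:

```python
import torch
import torch.nn as nn

def prune_experts(router: nn.Linear, experts: nn.ModuleList, keep_ids: list[int]):
    """Keep only the experts in keep_ids and shrink the router to match.

    keep_ids would come from activation statistics, e.g. the 13 experts
    most frequently routed to on law prompts.
    """
    keep_ids = sorted(keep_ids)
    # Slice the router so it only emits logits for the retained experts.
    new_router = nn.Linear(router.in_features, len(keep_ids), bias=router.bias is not None)
    with torch.no_grad():
        new_router.weight.copy_(router.weight[keep_ids])
        if router.bias is not None:
            new_router.bias.copy_(router.bias[keep_ids])
    new_experts = nn.ModuleList(experts[i] for i in keep_ids)
    return new_router, new_experts
```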
cooldazou/blockassist-bc-wise_foxy_stingray_1755068821
cooldazou
2025-08-13T07:08:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wise foxy stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:08:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wise foxy stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-8.4b-specialized-law-pruned-moe-only-11-experts
AmanPriyanshu
2025-08-13T07:05:32Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "law", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T07:05:02Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - law - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Law GPT-OSS Model (11 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for law tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~8.4B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 11 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Law | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from law evaluation tasks - **Systematic Reduction**: Reduced from 32 to 11 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 34.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to law tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This legal domain model employs experts that demonstrated expertise during law-related tasks from MMLU legal subjects. These experts excel at legal reasoning, jurisprudence, and understanding of legal frameworks and procedures. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for law tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 11 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 11 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-law-pruned-moe-only-11-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-law-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-law-pruned-moe-only-11-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-law-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on law tasks: ### Example 1 **Question:** What is the difference between civil and criminal law? **Response:** ``` <|channel|>analysis<|message|>Need to explain difference: civil law is disputes between private parties, remedies like damages, injunctions; criminal law is offenses against state, punishments like fines, imprisonment, etc. 
Also mention jurisdiction, burden of proof, standard of proof, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Civil law vs. criminal law** | Feature | Civil law | Criminal law | |---|---|---| | **Purpose** | Resolve disputes between private parties (e.g., contracts, torts, property, family matters). | Protect society and the state by puning conduct that is deemed harmful or dangerous. | | **Parties** | Plaintiff (private individual or entity) vs. defendant (private individual or entity). | State (or government) vs. defendant (private individual or entity). | | **Rem remedy** | Monetary damages, injunctions, specific performance, or restitution. | Penative sanctions: fines, community service, probation, imprisonment, or death in extreme cases. | | **Bur burden of proof** | “Pre‑ponderal evidence” (balance of evidence). | “Beyond a reasonable doubt” (high‑standard). | | **Standard of proof** | “Pre‑ponderal evidence” – more likely than the other side. | “Beyond a reasonable doubt” – the evidence must be so strong that a reasonable person would be convinced. | | **Procedure** | Civil courts (e.g., civil courts, arbitration, mediation). | Criminal courts (e.g., criminal courts, jury trials). | | **Rights** | Plaintiff may sue for damages or inj. | Defendant has the right to a trial, counsel, and the presumption of innocence. | | **Outcome** | The court orders the defendant to pay, or to do something. | The court orders the defendant to pay a fine, serve community service, or be imprisoned. | **Key points** 1. **Who is suing?** *Civil*: a private person or entity. *Criminal*: the state (or the government) on behalf of society. 2. **What is being sought?** *Civil*: compensation or a specific action. *Criminal*: punishment for wrongdoing. 3. **Proof standard** *Civil*: “balance of evidence.” *Criminal*: “beyond a reasonable doubt.” 4. **Consequences** *Civil*: monetary damages, injunctions, or restitution. *Criminal*: fines, community service ``` --- ### Example 2 **Question:** Explain the concept of due process in legal systems. **Response:** ``` <|channel|>analysis<|message|>Need explain due process.<|end|><|start|>assistant<|channel|>final<|message|>**Due Process (or “Due‑Process”) – the legal right to a fair, lawful treatment under the law** --- ### 1. What is the concept? * **Due process** is a constitutional or statutory safeguard that guarantees that a person, or a group, may be deprived of life, liberty, or property only through a fair, impartial, and lawful procedure. * It is a *procedual* right, not a *sub‑right* to a specific outcome. The law must be followed, the rules must be applied, and the decision must be made by a competent, unbiased authority. --- ### 2. Core elements | Element | What it means | Why it matters | |---|---|---| | **Notice** | The person must be informed of the charges, claims, or actions against them. | Prevents surprise, allows preparation. | | **Opportunity to respond** | The person must be given a chance to present evidence, argue, and cross‑cross witnesses. | Ensures the decision is based on a full record. | | **Imse‑impartial decision maker** | The judge, magistrate, or other authority must be unbiased and independent. | Avoids arbitrary or politically motivated rulings. | | **Reasoned decision** | The outcome must be explained, with a basis in the law and facts. | Allows review, appeals, and accountability. | | **Right to appeal** | The person may challenge the decision in a higher court or tribunal. | Provides a safety net against errors. 
| | **Procedual fairness** | The process must be conducted in a timely, orderly manner, respecting the rights of all parties. | Prevents abuse of power or delays that cause harm. | --- ### 3. Why is it essential? * **Rule of law** – It embodies the principle that the state may not act arbitrarily. * **Protection of minorities** – In plural societies, it safeguards the rights of the vulnerable, minorities, and dissenters. * **Legitimacy of the state** – A state that respects due process is seen as just, credible, and trustworthy. * **International law** – Many treaties (e.g., the Universal Declaration of Human Rights, the Convention on the Rights of the Child) embed due‑process principles. --- ### 4. How it works in practice | Country | Typical procedure | Key ``` --- ### Example 3 **Question:** What are the basic rights protected by the First Amendment? **Response:** ``` <|channel|>analysis<|message|>Need to answer: rights: freedom from unreasonable searches, seizures, due due process, equal protection, freedom of speech, religion, press, assembly, petition, right to bear arms. Also right to privacy? Actually privacy not explicit. Provide list.<|end|><|start|>assistant<|channel|>final<|message|>The First Amendment is the first of the Bill of Rights and it protects a set of fundamental liberties that are considered essential to a free society. In the text of the amendment itself, the most commonly cited “rights” are: | **Freedom from unreasonable searches and seizures** | The government may not conduct a search or seizure of a person, property, or personal documents without a probable‑cause warrant. | | **Due‑process (fair‑law) rights** | The government must follow a fair legal process before deprimin­ing a person’s life, liberty, or property. | | **Equal‑protection** | No state may discriminate against a person on the basis of race, religion, or other protected characteristics. | | **Freedom of speech** | Individuals may express opinions, ideas, and viewpoints without government censorship. | | **Freedom of religion** | Individuals may practice, change, or abandon a religion without state interference. | | **Freedom of press** | The press may publish news, opinions, and other information without government censorship. | | **Freedom of assembly** | People may gather together for peaceful protest, demonstration, or other civic activities. | | **Freedom to petition** | Citizens may petition the government for red‑action or red‑change. | | **Right to bear arms** | The right to own and carry firearms (as interpreted by the Supreme Court). | These are the core liberties that the First Amendment guarantees. The amendment also contains a “no‑law” clause that protects the right to “no‑law” (i.e., the right to be free from a law that is unconstitutional). 
In practice, the First Amendment is the foundation for many other rights that are protected by the Constitution, such as the right to privacy, the right to a fair trial, and the right to due‑process.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
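The "Smaller Memory Footprint" figures quoted across this family of cards follow directly from the expert counts, since experts are pruned uniformly in every layer. A quick check:

```python
# Fraction of expert parameters retained = kept experts / original experts.
for kept in (13, 11, 10):
    print(kept, f"{kept / 32:.1%}")  # 40.6%, 34.4%, 31.2% — the values reported in these cards
```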
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755067630
Sayemahsjn
2025-08-13T07:05:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:04:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
anl139/diffusion_test_1
anl139
2025-08-13T07:04:17Z
0
0
lerobot
[ "lerobot", "safetensors", "diffusion", "robotics", "dataset:anl139/record-Pastery50", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
robotics
2025-08-13T07:03:51Z
--- datasets: anl139/record-Pastery50 library_name: lerobot license: apache-2.0 model_name: diffusion pipeline_tag: robotics tags: - lerobot - diffusion - robotics --- # Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=diffusion \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
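A hedged Python sketch of loading this policy directly from the Hub (the import path differs between LeRobot releases, and the observation keys are placeholders rather than values taken from the card):

```python
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

# Load the trained diffusion policy weights from the Hub.
policy = DiffusionPolicy.from_pretrained("anl139/diffusion_test_1")
policy.eval()

# At control time, the policy consumes a dict of observation tensors
# (camera images, robot state, ...) matching the training dataset's
# feature keys, and returns the next action:
# action = policy.select_action(observation)
```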
AmanPriyanshu/gpt-oss-7.8b-specialized-law-pruned-moe-only-10-experts
AmanPriyanshu
2025-08-13T07:03:29Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "law", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T07:03:00Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - law - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Law GPT-OSS Model (10 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 10 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for law tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~7.8B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 10 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Law | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from law evaluation tasks - **Systematic Reduction**: Reduced from 32 to 10 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 31.2% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to law tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This legal domain model employs experts that demonstrated expertise during law-related tasks from MMLU legal subjects. These experts excel at legal reasoning, jurisprudence, and understanding of legal frameworks and procedures. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for law tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 10 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 10 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-7.8b-specialized-law-pruned-moe-only-10-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.8b-specialized-law-pruned-moe-only-10-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-7.8b-specialized-law-pruned-moe-only-10-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.8b-specialized-law-pruned-moe-only-10-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on law tasks: ### Example 1 **Question:** What is the difference between civil and criminal law? **Response:** ``` <|channel|>analysis<|message|>Need explain difference: civil law deals disputes between private parties, damages, contracts, torts; criminal law deals offenses against society, punishments, prosecution by state. Also procedural differences, burden of proof, standard of proof, rights, remedies. 
Provide examples.<|end|><|start|>assistant<|channel|>final<|message|>**Short‑answer** | **Civil law** | Dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑dis‑ ``` --- ### Example 2 **Question:** Explain the concept of due process in legal systems. **Response:** ``` <|channel|>analysis<|message|>Need explain due process concept.<|end|><|start|>assistant<|channel|>final<|message|>**Due process** (or *due process* in some jurisdictions) is a fundamental principle that guarantees that every person who is subject to legal action—whether criminal, civil, administrative, or otherwise—must be afforded a fair, impartial, and transparent legal procedure. In short, it means that the law must be applied *fairly* and *equently* to everyone, and that the courts and other authorities must follow a set of procedural rules that protect the rights of the parties involved. --- ## 1. What is “Due Process”? | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ``` --- ### Example 3 **Question:** What are the basic rights protected by the First Amendment? **Response:** ``` <|channel|>analysis<|message|>Need answer: First Amendment rights: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches, freedom from unreasonable searches? Actually First Amendment: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? Wait: First Amendment: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? Actually: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? Might list: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? 
But better: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? Actually: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? This is repetitive. Let's answer: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? Probably they want: freedom from slavery, freedom from unjust imprisonment, freedom from arbitrary searches, freedom from unreasonable searches, freedom from unreasonable seizures, freedom from unreasonable searches? I'll give concise.<|end|><|start|>assistant<|channel|>final<|message|>The First Amendment protects a set of fundamental freedoms that are the core of American liberty. In short, it guarantees: 1. **Freedom from slavery** – no person may be owned or forced to work against their will. 2. **Freedom from unjust imprisonment** – no person may be held in jail or other places of confinement without a lawful basis. 3. **Freedom from arbitrary searches** – no person may be searched or investigated without a reasonable suspicion or probable cause. 4. **Freedom from unreasonable searches** – no person may be subjected to intrusive investigations or surveillance without a proper legal justification. 5. **Freedom from unreasonable seizures** – no person may have property or personal belongings taken without a lawful warrant or due to a legitimate legal claim. These rights form the foundation of the rest of the Constitution and are the basic protections that the First Amendment guarantees.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
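To make the expert-selection step described in the pruning methodology above more concrete, here is a minimal, illustrative sketch of ranking experts by router activation frequency and keeping the top 10 for a single layer. This is not the authors' actual pipeline: the synthetic router log, its format, and the counting scheme are assumptions made purely for illustration.

```python
from collections import Counter
import random

# Hypothetical router log for one layer: for each token, the ids of the top-k
# experts the router selected (synthetic data stands in for the real
# activation dataset, whose schema is not reproduced here).
random.seed(0)
num_experts, top_k, num_tokens = 32, 4, 10_000
router_log = [random.sample(range(num_experts), top_k) for _ in range(num_tokens)]

# 1. Activation analysis: count how often each expert is selected on domain data.
activation_counts = Counter(expert for choice in router_log for expert in choice)

# 2. Expert ranking: order experts by activation frequency.
ranked = [expert for expert, _ in activation_counts.most_common()]

# 3. Systematic pruning: keep the 10 most-used experts for this layer.
kept = sorted(ranked[:10])
print(f"Keeping experts {kept}; dropping the other {num_experts - len(kept)}")
```

In the real model, the kept experts' weights and the corresponding rows of the router projection would also have to be sliced consistently in every layer, which this sketch does not attempt.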
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755066752
michaelcpage345
2025-08-13T07:01:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature deadly anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:01:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature deadly anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-7.2b-specialized-law-pruned-moe-only-9-experts
AmanPriyanshu
2025-08-13T07:01:19Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "law", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T07:00:52Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - law - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Law GPT-OSS Model (9 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 9 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for law tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~7.2B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 9 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Law | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from law evaluation tasks - **Systematic Reduction**: Reduced from 32 to 9 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 28.1% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to law tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This legal domain model employs experts that demonstrated expertise during law-related tasks from MMLU legal subjects. These experts excel at legal reasoning, jurisprudence, and understanding of legal frameworks and procedures. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for law tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 9 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 9 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-7.2b-specialized-law-pruned-moe-only-9-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-law-pruned-moe-only-9-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-7.2b-specialized-law-pruned-moe-only-9-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-law-pruned-moe-only-9-experts") # Generate with the model messages = [ {"role": "user", "content": "What is the difference between civil and criminal law?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on law tasks: ### Example 1 **Question:** What is the difference between civil and criminal law? **Response:** ``` <|channel|>analysis<|message|>Need explain difference: civil law (disrights, contracts, torts) vs criminal law (state prosecution, punishment). 
Provide distinctions, sources, procedures, burden of proof, remedies.<|end|><|start|>assistant<|channel|>final<|message|>**Short‑sense** | **Civil law** – disputes between private parties (e.g., contracts, torts, property, family). | **Cr‑criminal law** – offenses against the state (public order) that are punishable by fines, imprisonment, or other sanctions. --- ## 1. What the *source* of the law is | **Civil** – the law is a *private* relationship. The parties are the plaintiff and the defendant. | **Cr‑criminal** – the law is a *public* relationship. The state (or the prosecutor) is the plaintiff; the defendant is the accused. --- ## 2. What the *burur* of proof is | **Civil** – *pre‑burur* “balance of the evidence” (the plaintiff must show that the defendant is liable). | **Cr‑criminal** – *post‑burur* “beyond the reasonable doubt” (the prosecutor must show that the defendant is guilty). --- ## 3. What the *rem‑type* of remedy is | **Civil** – *comp‑dam* (dam damages, injunctions, specific performance, restitution). | **Cr‑criminal** – *pun‑s* (fines, imprisonment, probation, community service, etc.). --- ## 4. What the *procedure* is | **Civil** – usually *civil‑court* (judge, jury optional). Parties present evidence, witnesses, pleadings. | **Cr‑criminal** – usually *criminal‑court* (judge, jury). The prosecution presents evidence; the defense may present evidence to rebut it. The judge or jury decides guilt. --- ## 5. What the *rights* of the parties are | **Civil** – the plaintiff may sue for damages, recover property, or obtain an injunction. | **Cr‑criminal** – the defendant has the right to a pres‑inn pres‑trial, right to counsel, right to a trial by jury, right to appeal. --- ## 6. What the *pen‑s* is | **Civil** – the plaintiff can recover money or other relief. | **Cr‑criminal** – the state can impose ``` --- ### Example 2 **Question:** Explain the concept of due process in legal systems. **Response:** ``` <|channel|>analysis<|message|>Need explain due process.<|end|><|start|>assistant<|channel|>final<|message|>### What is “Due‑process” (or “Due‑law”)? In modern legal systems the idea that *everyone has a right to be treated fairly* is the foundation of the rule of law. The term **“due‑process”** (or **“due‑law”)** is a shorthand for that principle. It is a doctrine that guarantees that a person, a state, or an institution may not be deprived of a property, a liberty, or a status without a fair, impartial, and transparent procedure. --- ## 1. The roots of the concept | **Historical** | The concept can be traced back to the Magna‑Paz‑de‑law (the Magna‑Paz‑de‑law, ``` --- ### Example 3 **Question:** What are the basic rights protected by the First Amendment? **Response:** ``` <|channel|>analysis<|message|>Need to answer: basic rights protected by First Amendment: freedom of speech, religion, press, assembly, etc. Also right to petition. Also right to be free from unreasonable searches? Actually 4th. So answer: freedom of speech, religion, press, assembly, petition. Also right to be free from unreasonable searches and seizures? No. So answer.<|end|><|start|>assistant<|channel|>final<|message|>The First Amendment is the Constitution’s first “right” to the people of the United United. It protects the following basic rights: 1. **Freedom of Speech** – the right to express opinions, ideas, and viewpoints without government interference or punishment. 2. **Freedom of Religion** – the right to practice, observe, or not practice a religion, or to be free from religious persecution. 3. 
**Freedom of the Press** – the right to publish or distribute information, news, or opinions. 4. **Freedom of Assembly** – the right to gather, protest, or meet with others. 5. **Freedom to Petition** – the right to request or ask the government for help or change. These are the core rights that the First Amendment protects.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
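The use-case list above mentions speculative decoding with this pruned checkpoint as a draft model for the full GPT-OSS-20B. A minimal, hedged sketch of that idea using the `assistant_model` argument of `generate()` in transformers is shown below; whether assisted generation works well for this particular architecture and checkpoint pairing is an assumption that has not been verified here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Full model acts as the verifier, the pruned model as the draft.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-7.2b-specialized-law-pruned-moe-only-9-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("Summarize the elements of a valid contract.", return_tensors="pt").to(target.device)

# assistant_model enables assisted generation: the draft proposes tokens,
# and the full model accepts or rejects them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Assisted generation requires the draft and target to share a tokenizer, which should hold here since the pruned model is derived from GPT-OSS-20B.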
hobson123/blockassist-bc-mammalian_dense_gibbon_1755068089
hobson123
2025-08-13T07:00:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:00:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755066770
koloni
2025-08-13T06:59:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:59:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
logith190/chatgpt4
logith190
2025-08-13T06:59:31Z
0
0
transformers
[ "transformers", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T04:43:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755068236
IvanJAjebu
2025-08-13T06:58:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:58:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cpatonn/cogito-v2-preview-llama-109B-MoE-AWQ-4bit
cpatonn
2025-08-13T06:57:49Z
0
0
transformers
[ "transformers", "safetensors", "llama4", "image-text-to-text", "conversational", "base_model:deepcogito/cogito-v2-preview-llama-109B-MoE", "base_model:quantized:deepcogito/cogito-v2-preview-llama-109B-MoE", "license:llama4", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
image-text-to-text
2025-08-13T06:28:00Z
--- license: llama4 library_name: transformers base_model: - deepcogito/cogito-v2-preview-llama-109B-MoE --- <p align="center"> <img src="images/deep-cogito-logo.png" alt="Logo" width="40%"> </p> # Cogito v2 preview - 109B MoE [Blog Post](https://www.deepcogito.com/research/cogito-v2-preview) The Cogito v2 LLMs are instruction-tuned generative models. All models are released under an open license for commercial use. - Cogito v2 models are hybrid reasoning models. Each model can answer directly (standard LLM), or self-reflect before answering (like reasoning models). - The LLMs are trained using **Iterated Distillation and Amplification (IDA)** - a scalable and efficient alignment strategy for superintelligence using iterative self-improvement. - The models have been optimized for coding, STEM, instruction following and general helpfulness, and have significantly higher multilingual, coding and tool calling capabilities than size equivalent counterparts. - In both standard and reasoning modes, Cogito v2-preview models outperform their size equivalent counterparts on common industry benchmarks. - This model is trained in over 30 languages and supports long contexts (up to 10M tokens). # Evaluations Here is the model performance on some standard industry benchmarks: <p align="left"> <img src="images/cogito-v2-109b-benchmarks.png" alt="Logo" width="90%"> </p> For detailed evaluations, please refer to the [Blog Post](https://www.deepcogito.com/research/cogito-v2-preview). # Usage Here is a snippet for usage with Transformers: ```python import transformers import torch model_id = "deepcogito/cogito-v2-preview-llama-109B-MoE" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Give me a short introduction to LLMs."}, ] outputs = pipeline( messages, max_new_tokens=512, ) print(outputs[0]["generated_text"][-1]) ``` ## Implementing extended thinking - By default, the model will answer in the standard mode. - To enable thinking, you can do any one of the two methods: - Set `enable_thinking=True` while applying the chat template. - Add a specific system prompt, along with prefilling the response with "\<think\>\n". **NOTE: Unlike Cogito v1 models, we initiate the response with "\<think\>\n" at the beginning of every output when reasoning is enabled. This is because hybrid models can be brittle at times (<0.1% of the cases), and adding a "\<think\>\n" ensures that the model does indeed respect thinking.** ### Method 1 - Set enable_thinking=True in the tokenizer If you are using Huggingface tokenizers, then you can simply add the argument `enable_thinking=True` to the tokenization (this option is added to the chat template). Here is an example - ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "deepcogito/cogito-v2-preview-llama-109B-MoE" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to LLMs." 
messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ### Method 2 - Add a specific system prompt, along with prefilling the response with "\<think\>\n". To enable thinking using this method, you need to follow two steps - Step 1 - Simply use this in the system prompt `system_instruction = 'Enable deep thinking subroutine.'` If you already have a system_instruction, then use `system_instruction = 'Enable deep thinking subroutine.' + '\n\n' + system_instruction`. Step 2 - Prefill the response with the tokens `"<think>\n"`. Here is an example - ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = "deepcogito/cogito-v2-preview-llama-109B-MoE" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) # Step 1 - Add deep thinking instruction. DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine." messages = [ {"role": "system", "content": DEEP_THINKING_INSTRUCTION}, {"role": "user", "content": "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."}, ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) # Step 2 - Prefill response with "<think>\n". text += "<think>\n" # Now, continue as usual. model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` Similarly, if you already have a system prompt, you can prepend the `DEEP_THINKING_INSTRUCTION` to it in this way - ```python DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine." system_prompt = "Reply to each prompt with only the actual code - no explanations." prompt = "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format." messages = [ {"role": "system", "content": DEEP_THINKING_INSTRUCTION + '\n\n' + system_prompt}, {"role": "user", "content": prompt} ] ``` # Tool Calling Cogito models support tool calling (single, parallel, multiple and parallel_multiple) both in standard and extended thinking mode. Here is a snippet - ```python # First, define a tool def get_current_temperature(location: str) -> float: """ Get the current temperature at a location. Args: location: The location to get the temperature for, in the format "City, Country" Returns: The current temperature at the specified location, as a float. """ return 22. # A real function should probably actually get the temperature! 
# Next, create a chat and apply the chat template messages = [ {"role": "user", "content": "Hey, what's the temperature in Paris right now?"} ] model_inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True) text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False) inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device) outputs = model.generate(**inputs, max_new_tokens=512) output_text = tokenizer.batch_decode(outputs)[0][len(text):] print(output_text) ``` This will result in the output - ``` <tool_call> {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}} </tool_call><|eot|> ``` You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so: ```python tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}} messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]}) ``` and then call the tool and append the result, with the `tool` role, like so: ```python messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"}) ``` After that, you can `generate()` again to let the model use the tool result in the chat: ```python text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False) inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device) outputs = model.generate(**inputs, max_new_tokens=512) output_text = tokenizer.batch_decode(outputs)[0][len(text):] ``` This should result in the string - ``` 'The current temperature in Paris is 22.0 degrees.<|eot|>' ``` ## License This repository and the model weights are licensed under the [Llama 4 Community License Agreement](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) (Llama models' default license agreement). ## Contact If you would like to reach out to our team, send an email to [[email protected]]([email protected]).
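One addendum specific to this repository: the snippets above load the original `deepcogito/cogito-v2-preview-llama-109B-MoE` checkpoint, while this repo hosts an AWQ 4-bit quantization of it (per the repository name and the `compressed-tensors` tag). Below is a hedged sketch of serving the quantized weights with vLLM; that vLLM's compressed-tensors support picks up this checkpoint automatically, as well as the parallelism and context-length settings, are untested assumptions.

```python
from vllm import LLM, SamplingParams

# Assumption: vLLM detects the AWQ/compressed-tensors quantization from the
# checkpoint config; tensor_parallel_size and max_model_len are illustrative.
llm = LLM(
    model="cpatonn/cogito-v2-preview-llama-109B-MoE-AWQ-4bit",
    tensor_parallel_size=2,
    max_model_len=8192,
)

params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
outputs = llm.chat(
    [{"role": "user", "content": "Give me a short introduction to LLMs."}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```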
myonlineworld05/blockassist-bc-swift_raging_swan_1755068158
myonlineworld05
2025-08-13T06:57:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "swift raging swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:56:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - swift raging swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koreanboy/my_summary_model
koreanboy
2025-08-13T06:56:02Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:55:47Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_summary_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_summary_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7034 - Rouge1: 0.145 - Rouge2: 0.05 - Rougel: 0.1188 - Rougelsum: 0.1187 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 3.4617 | 1.0 | 62 | 3.0013 | 0.1262 | 0.0332 | 0.104 | 0.104 | 20.0 | | 3.0289 | 2.0 | 124 | 2.7830 | 0.1375 | 0.045 | 0.112 | 0.1123 | 20.0 | | 2.9931 | 3.0 | 186 | 2.7208 | 0.1439 | 0.0491 | 0.1174 | 0.1171 | 20.0 | | 2.9153 | 4.0 | 248 | 2.7034 | 0.145 | 0.05 | 0.1188 | 0.1187 | 20.0 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
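The card's usage sections are still marked "More information needed"; as a placeholder, here is a minimal, hedged inference sketch using the summarization pipeline. The repository id is taken from this card, and the pipeline is assumed to apply T5's usual "summarize: " task prefix from the model config, which matches how t5-small fine-tunes are typically run but has not been verified for this checkpoint.

```python
from transformers import pipeline

# Assumption: the checkpoint id matches this repository; the pipeline applies
# T5's default "summarize: " task prefix from the model config.
summarizer = pipeline("summarization", model="koreanboy/my_summary_model")

article = (
    "The city council approved a new transit plan on Tuesday, adding three bus "
    "routes and extending service hours to midnight starting next spring."
)
summary = summarizer(article, do_sample=False)
print(summary[0]["summary_text"])
```

Note that the evaluation table above reports a generation length (Gen Len) of 20.0, so short outputs are expected.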
mang3dd/blockassist-bc-tangled_slithering_alligator_1755066505
mang3dd
2025-08-13T06:55:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:55:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755066275
milliarderdol
2025-08-13T06:54:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:54:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring rough scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ironman288/blockassist-bc-miniature_lanky_vulture_1755065195
Ironman288
2025-08-13T06:53:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature lanky vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:53:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature lanky vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
emaanbilal/qwen_7b_legal_lora
emaanbilal
2025-08-13T06:53:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "unsloth", "trl", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-10T15:08:22Z
--- base_model: unsloth/Qwen2.5-7B-Instruct library_name: transformers model_name: qwen_7b_legal_lora tags: - generated_from_trainer - sft - unsloth - trl licence: license --- # Model Card for qwen_7b_legal_lora This model is a fine-tuned version of [unsloth/Qwen2.5-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="emaanbilal/qwen_7b_legal_lora", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/emaanbilal-aag/lora-med-qwen-finetuning/runs/bgh43yqw) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.54.1 - Pytorch: 2.7.1+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
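A note on loading, hedged because it depends on what the repository actually contains: the quick start above assumes the pipeline can load this repo directly. If the repo holds only LoRA adapter weights (which the model name and the unsloth/TRL SFT setup suggest, but which is not verified here), attaching the adapter to the base model with PEFT, as sketched below, may be needed instead.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "unsloth/Qwen2.5-7B-Instruct"        # base model named on this card
adapter_id = "emaanbilal/qwen_7b_legal_lora"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the LoRA adapter; merge_and_unload() folds it into the base weights
# so inference runs on a plain transformers model.
model = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

messages = [{"role": "user", "content": "Explain the concept of consideration in contract law."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```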
acidjp/blockassist-bc-pesty_extinct_prawn_1755067586
acidjp
2025-08-13T06:53:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:52:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amoogeona/taxi
amoogeona
2025-08-13T06:52:05Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-03-20T05:47:53Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="amoogeona/taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
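The snippet above relies on the `load_from_hub` helper from the Hugging Face Deep RL course notebooks. A more self-contained, hedged sketch is shown below; it assumes the pickle follows the course convention (a dict with at least "env_id" and "qtable" keys), which has not been verified against this repository.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Assumption: the pickle follows the Deep RL course layout with "env_id" and "qtable" keys.
path = hf_hub_download(repo_id="amoogeona/taxi", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

# Roll out one greedy episode with the learned Q-table.
state, _ = env.reset(seed=0)
total_reward, done = 0, False
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```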
EleutherAI/deep-ignorance-e2e-strong-filter-cb
EleutherAI
2025-08-13T06:51:56Z
10
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T10:51:39Z
--- base_model: - EleutherAI/deep-ignorance-pretraining-stage-unfiltered datasets: - EleutherAI/deep-ignorance-pretraining-mix - EleutherAI/deep-ignorance-annealing-mix language: - en license: apache-2.0 library_name: transformers pipeline_tag: text-generation tags: - pytorch - causal-lm - pythia - safety - unlearning - data-filtering - interpretability - pretraining - eleutherai - gpt-neox - wmdp - cbrn - tamper-resistance - research - model-suite - 6.9b - circuit-breaking - knowledge-filtering - open-weight - biothreat - safety-research - model-diffing - training-dynamics --- # Deep Ignorance Model Suite This model is presented in the paper [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601). Project Page: https://deepignorance.ai/ Code: https://github.com/EleutherAI/deep-ignorance We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning [(see paper)](https://huggingface.co/papers/2508.06601). It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11. > **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper. 
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
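Since the intermediate checkpoints are exposed as revisions, a small, hedged sketch of a training-dynamics probe is shown below: it loads a few revisions and tracks the loss on a fixed probe sentence. The step numbers other than `global_step11921` are placeholders; check each repository's branch list for the checkpoints that actually exist.

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

repo = "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal"
# Placeholder revisions; only global_step11921 is confirmed above. Replace the
# others with branch names listed on the Hub.
revisions = ["global_step2000", "global_step8000", "global_step11921"]

probe = "Filtering pretraining data can change what a language model knows."

for rev in revisions:
    tokenizer = AutoTokenizer.from_pretrained(repo, revision=rev)
    model = GPTNeoXForCausalLM.from_pretrained(repo, revision=rev, torch_dtype=torch.float32)
    inputs = tokenizer(probe, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"{rev}: loss = {loss.item():.3f}")
```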
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:------|:---------------------|:-------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 
78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-pretraining-stage-strong-filter
EleutherAI
2025-08-13T06:51:51Z
489
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-06T17:03:00Z
--- base_model: - EleutherAI/deep-ignorance-pretraining-stage-unfiltered datasets: - EleutherAI/deep-ignorance-pretraining-mix - EleutherAI/deep-ignorance-annealing-mix language: - en license: apache-2.0 library_name: transformers pipeline_tag: text-generation tags: - pytorch - causal-lm - pythia - safety - unlearning - data-filtering - interpretability - pretraining - eleutherai - gpt-neox - wmdp - cbrn - tamper-resistance - research - model-suite - 6.9b - circuit-breaking - knowledge-filtering - open-weight - biothreat - safety-research - model-diffing - training-dynamics --- # Deep Ignorance Model Suite This model suite is presented in the paper [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601). **Project Page**: https://deepignorance.ai/ **Code**: https://github.com/EleutherAI/deep-ignorance We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models comprising a baseline model trained on unfiltered data and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining stage models have 101 checkpoints and annealing stage models have 11. > **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
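If you want to enumerate the intermediate checkpoints programmatically, the branches of a model repo can be listed with `huggingface_hub`. The sketch below is a minimal example, not part of the official release tooling; it assumes every checkpoint branch follows the `global_step<N>` naming shown above.

```python
from huggingface_hub import list_repo_refs
from transformers import GPTNeoXForCausalLM, AutoTokenizer

repo_id = "EleutherAI/deep-ignorance-pretraining-stage-strong-filter"

# Intermediate checkpoints are published as branches of the model repo.
# Assumption: every checkpoint branch follows the global_step<N> pattern.
refs = list_repo_refs(repo_id)
step_branches = sorted(
    (ref.name for ref in refs.branches if ref.name.startswith("global_step")),
    key=lambda name: int(name[len("global_step"):]),
)
print(f"{len(step_branches)} checkpoint branches, earliest: {step_branches[0]}")

# Load an early checkpoint to study training dynamics.
model = GPTNeoXForCausalLM.from_pretrained(repo_id, revision=step_branches[0])
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=step_branches[0])
```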
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 
58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-unfiltered
EleutherAI
2025-08-13T06:51:48Z
189
1
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:38:20Z
--- base_model: - EleutherAI/deep-ignorance-pretraining-stage-unfiltered datasets: - EleutherAI/deep-ignorance-pretraining-mix - EleutherAI/deep-ignorance-annealing-mix language: - en license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - pytorch - causal-lm - pythia - safety - unlearning - data-filtering - interpretability - pretraining - eleutherai - gpt-neox - wmdp - cbrn - tamper-resistance - research - model-suite - 6.9b - circuit-breaking - knowledge-filtering - open-weight - biothreat - safety-research - model-diffing - training-dynamics --- # Deep Ignorance Model Suite We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. This model is described in the paper: [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601). Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11. **Project Page**: [https://deepignorance.ai/](https://deepignorance.ai/) **Code**: [https://github.com/EleutherAI/deep-ignorance](https://github.com/EleutherAI/deep-ignorance) > **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper. 
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
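One simple way to use the intermediate checkpoints is to compare an earlier checkpoint against the final one on the same prompt. The sketch below is illustrative only: the earlier `global_step` branch name is a placeholder, so check the repo's branch list for the checkpoints that actually exist.

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

repo_id = "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal"
prompt = "The experiment showed that"

# "global_step1000" is a placeholder for an earlier checkpoint branch;
# "global_step11921" is the final checkpoint on the main branch.
for revision in ["global_step1000", "global_step11921"]:
    tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
    model = GPTNeoXForCausalLM.from_pretrained(repo_id, revision=revision)
    inputs = tokenizer(prompt, return_tensors="pt")
    tokens = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    print(revision, "->", tokenizer.decode(tokens[0], skip_special_tokens=True))
```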
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 
58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted
EleutherAI
2025-08-13T06:51:38Z
39
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T02:59:34Z
--- base_model: - EleutherAI/deep-ignorance-pretraining-stage-unfiltered datasets: - EleutherAI/deep-ignorance-pretraining-mix - EleutherAI/deep-ignorance-annealing-mix language: - en license: apache-2.0 library_name: transformers pipeline_tag: text-generation tags: - pytorch - causal-lm - pythia - safety - unlearning - data-filtering - interpretability - pretraining - eleutherai - gpt-neox - wmdp - cbrn - tamper-resistance - research - model-suite - 6.9b - circuit-breaking - knowledge-filtering - open-weight - biothreat - safety-research - model-diffing - training-dynamics --- # Deep Ignorance Model Suite [📄 Paper](https://huggingface.co/papers/2508.06601) | [🌐 Project Page](https://deepignorance.ai/) | [💻 GitHub Repository](https://github.com/EleutherAI/deep-ignorance) We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning ([see project page](https://deepignorance.ai)). It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11. > **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper. 
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
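Because these are base models without post-training, they are often easier to compare via per-answer log-likelihood scoring than via free-form generation, in the spirit of the length-normalized cloze evaluation described in the Evaluations section below. The sketch below is a minimal illustration, not the evaluation harness used in the paper: the question and candidate answers are made up, and it assumes the tokenization of the stem is a prefix of the tokenization of stem plus answer.

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id)
model.eval()

def length_normalized_logprob(stem: str, answer: str) -> float:
    """Mean per-token log probability of `answer` as a continuation of `stem`."""
    stem_len = tokenizer(stem, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(stem + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position t predicts token t+1, so drop the final position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    answer_positions = range(stem_len - 1, full_ids.shape[1] - 1)
    scores = [log_probs[pos, full_ids[0, pos + 1]].item() for pos in answer_positions]
    return sum(scores) / len(scores)

# Hypothetical question and candidates, for illustration only.
stem = "The powerhouse of the cell is the"
candidates = [" mitochondrion", " ribosome", " nucleus"]
ranked = sorted(candidates, key=lambda a: length_normalized_logprob(stem, a), reverse=True)
print(ranked[0])
```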
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 
58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-e2e-strong-filter
EleutherAI
2025-08-13T06:50:58Z
72
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T04:32:40Z
---
base_model:
- EleutherAI/deep-ignorance-pretraining-stage-unfiltered
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
---

# Deep Ignorance Model Suite

**Paper**: [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601)

**Project page**: https://deepignorance.ai/

**Code**: https://github.com/EleutherAI/deep-ignorance

We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**.

In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models, comprising a baseline model trained on unfiltered data and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining stage models have 101 checkpoints and annealing stage models have 11.

> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.

> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.

## Research

Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.

We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.

## Uses and Limitations

### Quickstart

We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) print(tokenizer.decode(tokens[0])) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
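If you want to see which intermediate checkpoints a repository exposes before picking a `revision`, you can enumerate its branches programmatically. The snippet below is a minimal sketch (not part of the original quickstart) using `huggingface_hub`; it assumes the checkpoint branches follow the `global_step<N>` naming shown in the snippet above.

```python
from huggingface_hub import list_repo_refs

# List every branch of the repo and keep the intermediate checkpoints.
# Assumes checkpoint branches are named like "global_step<N>", as in the quickstart above.
refs = list_repo_refs("EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal")
checkpoints = sorted(
    (b.name for b in refs.branches if b.name.startswith("global_step")),
    key=lambda name: int(name.removeprefix("global_step")),
)
print(checkpoints)
```

Each listed name can then be passed as the `revision` argument of `from_pretrained` to load that training-time checkpoint.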
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% 
(-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal
EleutherAI
2025-08-13T06:50:54Z
46
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:40:27Z
---
base_model:
- EleutherAI/deep-ignorance-pretraining-stage-unfiltered
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
pipeline_tag: text-generation
library_name: transformers
---

# Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs

This model suite was presented in the paper [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601).

**Project Page**: [https://deepignorance.ai/](https://deepignorance.ai/)

**Code**: [https://github.com/EleutherAI/deep-ignorance](https://github.com/EleutherAI/deep-ignorance)

## Abstract

Open-weight AI systems offer unique benefits, including enhanced transparency, open research, and decentralized access. However, they are vulnerable to tampering attacks which can efficiently elicit harmful behaviors by modifying weights or activations. Currently, there is not yet a robust science of open-weight model risk management. Existing safety fine-tuning methods and other post-training techniques have struggled to make LLMs resistant to more than a few dozen steps of adversarial fine-tuning. In this paper, we investigate whether filtering text about dual-use topics from training data can prevent unwanted capabilities and serve as a more tamper-resistant safeguard. We introduce a multi-stage pipeline for scalable data filtering and show that it offers a tractable and effective method for minimizing biothreat proxy knowledge in LLMs. We pretrain multiple 6.9B-parameter models from scratch and find that they exhibit substantial resistance to adversarial fine-tuning attacks on up to 10,000 steps and 300M tokens of biothreat-related text -- outperforming existing post-training baselines by over an order of magnitude -- with no observed degradation to unrelated capabilities. However, while filtered models lack internalized dangerous knowledge, we find that they can still leverage such information when it is provided in context (e.g., via search tool augmentation), demonstrating a need for a defense-in-depth approach. Overall, these findings help to establish pretraining data curation as a promising layer of defense for open-weight AI systems.

---

We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**.

In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models, comprising a baseline model trained on unfiltered data and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining stage models have 101 checkpoints and annealing stage models have 11.
> **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper. | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
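The Verified Cloze evaluation described under Evaluations below scores each candidate answer by its length-normalized log probability under the model. The snippet below is a rough sketch of that scoring for a single toy question; it is not the evaluation harness used in the paper, and the question and candidate answers are illustrative only.

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id)
model.eval()

def length_normalized_logprob(prompt: str, answer: str) -> float:
    """Average per-token log probability of `answer` given `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    # Note: re-tokenizing prompt + answer can shift token boundaries slightly;
    # acceptable for a sketch, but a real harness should tokenize more carefully.
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i predicts token i + 1, so drop the last logit and the first token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    answer_ids = full_ids[:, prompt_len:]                 # tokens belonging to the answer
    answer_log_probs = log_probs[:, prompt_len - 1 :, :]  # logits that predict those tokens
    token_scores = answer_log_probs.gather(-1, answer_ids.unsqueeze(-1)).squeeze(-1)
    return token_scores.mean().item()

# Toy example: the candidate with the higher score is the model's preferred completion.
question = "Photosynthesis primarily takes place in the"
for candidate in [" chloroplast.", " mitochondria."]:
    print(candidate, length_normalized_logprob(question, candidate))
```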
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 
58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-pretraining-stage-unfiltered
EleutherAI
2025-08-13T06:50:51Z
9
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-06T16:15:47Z
---
base_model:
- EleutherAI/deep-ignorance-pretraining-stage-unfiltered
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
---

# Deep Ignorance Model Suite

We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**.

**Paper:** [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601)

**Project Page:** [https://deepignorance.ai/](https://deepignorance.ai/)

**Code:** [https://github.com/EleutherAI/deep-ignorance](https://github.com/EleutherAI/deep-ignorance)

In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models, comprising a baseline model trained on unfiltered data and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining stage models have 101 checkpoints and annealing stage models have 11.

> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.

> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.

## Research

Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.

We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.

## Uses and Limitations

### Quickstart

We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) print(tokenizer.decode(tokens[0])) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
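At 6.9B parameters, loading a checkpoint in full fp32 precision can exhaust memory on smaller GPUs. Below is a minimal sketch of a lower-precision load; it uses standard `transformers` arguments rather than anything specific to this release, and it assumes a CUDA GPU with enough memory plus the optional `accelerate` package for `device_map`.

```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal"

# bfloat16 roughly halves memory relative to fp32; device_map="auto" places
# the weights on the available GPU(s) and requires the `accelerate` package.
model = GPTNeoXForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The cell membrane is composed of", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(tokens[0]))
```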
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 
58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ```bibtex @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
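As a companion to the Evaluations section above, the length-normalized log-probability scoring behind the WMDP-Bio Verified Cloze format can be sketched in a few lines. This is a simplified illustration of the general technique rather than the exact harness used in the paper, and the toy prompt and candidate completions below are made up:

```python
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model_id = "EleutherAI/deep-ignorance-unfiltered"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

def lognorm_score(prompt: str, answer: str) -> float:
    """Average log-probability per answer token, conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    answer_len = full_ids.shape[1] - prompt_ids.shape[1]
    # Logits at position i predict token i + 1, so shift by one before indexing.
    logprobs = torch.log_softmax(logits[0, :-1].float(), dim=-1)
    targets = full_ids[0, 1:]
    token_scores = logprobs[torch.arange(targets.shape[0]), targets]
    # Length normalization: average over the answer tokens only.
    return token_scores[-answer_len:].mean().item()

# Each candidate is scored independently, so the model never sees the other
# choices -- the point of the cloze-style setup described above.
prompt = "The capital of France is"
candidates = [" Paris.", " Berlin.", " Madrid."]
scores = {c: lognorm_score(prompt, c) for c in candidates}
print(max(scores, key=scores.get))
```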
EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal
EleutherAI
2025-08-13T06:50:04Z
15
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-28T07:19:29Z
--- base_model: - EleutherAI/deep-ignorance-pretraining-stage-unfiltered datasets: - EleutherAI/deep-ignorance-pretraining-mix - EleutherAI/deep-ignorance-annealing-mix language: - en license: apache-2.0 library_name: transformers pipeline_tag: text-generation tags: - pytorch - causal-lm - pythia - safety - unlearning - data-filtering - interpretability - pretraining - eleutherai - gpt-neox - wmdp - cbrn - tamper-resistance - research - model-suite - 6.9b - circuit-breaking - knowledge-filtering - open-weight - biothreat - safety-research - model-diffing - training-dynamics --- # Deep Ignorance Model Suite We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11. ## Paper, Project Page, and Code * **Paper:** [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601) * **Project Page:** https://deepignorance.ai/ * **GitHub Repository:** https://github.com/EleutherAI/deep-ignorance > **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper. 
| Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
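The intermediate checkpoints mentioned above are published as branches of each model repository. As a minimal sketch (assuming the standard `huggingface_hub` API and that checkpoint branches follow the `global_step*` naming shown above), you can enumerate the available revisions and load a specific one:

```python
from huggingface_hub import list_repo_refs
from transformers import AutoTokenizer, GPTNeoXForCausalLM

repo_id = "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal"

# Intermediate checkpoints appear as branches such as "global_step11921" alongside "main".
refs = list_repo_refs(repo_id)
checkpoint_branches = sorted(
    ref.name for ref in refs.branches if ref.name.startswith("global_step")
)
print(checkpoint_branches)

# Load one intermediate checkpoint to study behavior partway through training.
revision = checkpoint_branches[0]  # hypothetical choice: earliest listed step
model = GPTNeoXForCausalLM.from_pretrained(repo_id, revision=revision)
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
```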
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% 
(-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted
EleutherAI
2025-08-13T06:50:00Z
20
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "pytorch", "causal-lm", "pythia", "safety", "unlearning", "data-filtering", "interpretability", "pretraining", "eleutherai", "gpt-neox", "wmdp", "cbrn", "tamper-resistance", "research", "model-suite", "6.9b", "circuit-breaking", "knowledge-filtering", "open-weight", "biothreat", "safety-research", "model-diffing", "training-dynamics", "en", "dataset:EleutherAI/deep-ignorance-pretraining-mix", "dataset:EleutherAI/deep-ignorance-annealing-mix", "arxiv:2508.06601", "base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T20:09:29Z
--- base_model: - EleutherAI/deep-ignorance-pretraining-stage-unfiltered datasets: - EleutherAI/deep-ignorance-pretraining-mix - EleutherAI/deep-ignorance-annealing-mix language: - en license: apache-2.0 library_name: transformers pipeline_tag: text-generation tags: - pytorch - causal-lm - pythia - safety - unlearning - data-filtering - interpretability - pretraining - eleutherai - gpt-neox - wmdp - cbrn - tamper-resistance - research - model-suite - 6.9b - circuit-breaking - knowledge-filtering - open-weight - biothreat - safety-research - model-diffing - training-dynamics --- # Deep Ignorance Model Suite We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. This model suite is described in the paper [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601). **Paper Abstract:** Open-weight AI systems offer unique benefits, including enhanced transparency, open research, and decentralized access. However, they are vulnerable to tampering attacks which can efficiently elicit harmful behaviors by modifying weights or activations. Currently, there is not yet a robust science of open-weight model risk management. Existing safety fine-tuning methods and other post-training techniques have struggled to make LLMs resistant to more than a few dozen steps of adversarial fine-tuning. In this paper, we investigate whether filtering text about dual-use topics from training data can prevent unwanted capabilities and serve as a more tamper-resistant safeguard. We introduce a multi-stage pipeline for scalable data filtering and show that it offers a tractable and effective method for minimizing biothreat proxy knowledge in LLMs. We pretrain multiple 6.9B-parameter models from scratch and find that they exhibit substantial resistance to adversarial fine-tuning attacks on up to 10,000 steps and 300M tokens of biothreat-related text -- outperforming existing post-training baselines by over an order of magnitude -- with no observed degradation to unrelated capabilities. However, while filtered models lack internalized dangerous knowledge, we find that they can still leverage such information when it is provided in context (e.g., via search tool augmentation), demonstrating a need for a defense-in-depth approach. Overall, these findings help to establish pretraining data curation as a promising layer of defense for open-weight AI systems. **Project Page:** [https://deepignorance.ai/](https://deepignorance.ai/) **GitHub Repository:** [https://github.com/EleutherAI/deep-ignorance](https://github.com/EleutherAI/deep-ignorance) Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11. 
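The abstract above describes the data-filtering pipeline only at a high level. Purely as a generic illustration of multi-stage filtering (this is not the pipeline actually used in the paper), a common pattern is to run a cheap lexical pre-filter over every document and send only the flagged subset to a more expensive model-based classifier:

```python
from typing import Callable, Iterable, Iterator

def two_stage_filter(
    docs: Iterable[str],
    blocklist: set,
    classifier: Callable[[str], float],
    threshold: float = 0.5,
) -> Iterator[str]:
    """Yield documents that pass both stages; drop the rest."""
    for doc in docs:
        lowered = doc.lower()
        # Stage 1: cheap lexical pre-filter; most documents pass straight through.
        if not any(term in lowered for term in blocklist):
            yield doc
            continue
        # Stage 2: expensive model-based score, run only on flagged documents.
        if classifier(doc) < threshold:
            yield doc

# Placeholder scorer for the sketch; a real pipeline would call a trained classifier.
toy_classifier = lambda doc: 1.0 if "synthesis route" in doc.lower() else 0.0

kept = list(
    two_stage_filter(
        ["a recipe for sourdough bread", "notes on a toxin synthesis route"],
        blocklist={"toxin"},
        classifier=toy_classifier,
    )
)
print(kept)  # only the benign document survives
```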
> **Support:** > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times. > **Note:** > We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states. ## Research Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks. ## Uses and Limitations ### Quickstart We recommend starting with the following models as these are the ones studied most extensively in our paper. | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | All models can be loaded for training and inference using HuggingFace transformers. ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal", revision="global_step11921", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model. 
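One concrete starting point for the model-diffing direction mentioned in the Research section is to compare parameters between the unfiltered baseline and a filtered model. The sketch below is an illustration only, not a procedure from the paper; it assumes enough CPU memory for two 6.9B checkpoints and relies on all models in the suite sharing the same architecture:

```python
import torch
from transformers import GPTNeoXForCausalLM

# Load the unfiltered baseline and a filtered counterpart on CPU.
baseline = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/deep-ignorance-unfiltered", torch_dtype=torch.bfloat16
)
filtered = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/deep-ignorance-e2e-strong-filter", torch_dtype=torch.bfloat16
)

# Per-tensor L2 distance between corresponding weights; larger values point to
# parameters that differ most between the two training runs.
filtered_params = dict(filtered.named_parameters())
diffs = {
    name: (p.float() - filtered_params[name].float()).norm().item()
    for name, p in baseline.named_parameters()
}

for name, d in sorted(diffs.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{name}: {d:.2f}")
```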
### Full Model List | Model | Pretraining Filtering | Annealing Filtering | Post-training | |:------|:---------------------|:-------------------|:--------------| | **Unfiltered Baseline Models** | | | | | [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - | | [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking | | [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training | | **Pretraining-Stage Only Models** | | | | | [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - | | [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - | | [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - | | [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - | | **End-to-End Filtered Models** | | | | | [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - | | [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - | | [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking | | [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - | | [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking | | [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training | | [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning | | [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning | ### Intended Use Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, 
providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face. Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work. ### Out-of-scope use The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF). ## Training All of our models undergo identical pretraining and annealing setups except for some data being removed by filters. All other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch until they reach 550B training tokens in total. ### Training data **[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents. **[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities. ## Evaluations We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities. ### Biothreat Proxy Knowledge Benchmarks We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation: **WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. 
This subset provides a more reliable assessment of genuine biothreat proxy knowledge. **WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded. ### General Capability Benchmarks To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks: <!-- - **MMLU-No-Bio**: 53 topics from MMLU excluding biology-related subjects, measuring broad knowledge retention - **MMLU-Bio**: High school and college biology topics from MMLU, assessing benign biological knowledge --> - **MMLU**: Factual knowledge across diverse topics - **PIQA**: Physical commonsense reasoning tasks - **LAMBADA**: Text comprehension requiring full-context understanding - **HellaSwag**: Commonsense natural language inference | Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) | |:---------------------------------------------------------------------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------| | deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% | | deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) | | deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) | | deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) | | deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) | | deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) | | deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) | | deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) | | deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) | | deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 
58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) | | deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) | | deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) | | deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) | | deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) | # Acknowledgments This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites. # Citation ``` @article{obrien2025deepignorance, title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs}, author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella}, journal={arXiv preprint arXiv:2508.06601}, year={2025} } ```
lur601/xlm-roberta-base-finetuned-panx-en
lur601
2025-08-13T06:45:31Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T06:39:59Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3875 - F1: 0.7035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0308 | 1.0 | 50 | 0.4977 | 0.5765 | | 0.4871 | 2.0 | 100 | 0.3848 | 0.6805 | | 0.363 | 3.0 | 150 | 0.3875 | 0.7035 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
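The usage sections above are unfilled, so here is a minimal inference sketch. It assumes the checkpoint is intended for NER-style token classification (as the PAN-X naming suggests) and uses the repository id from this card's metadata; swap in a local path if you load it from disk:

```python
from transformers import pipeline

model_id = "lur601/xlm-roberta-base-finetuned-panx-en"  # repo id from this card's metadata

ner = pipeline(
    "token-classification",
    model=model_id,
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

print(ner("Jeff Dean works at Google in Mountain View."))
```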
nice2mitya/a_706700109
nice2mitya
2025-08-13T06:45:20Z
0
0
null
[ "license:other", "region:us" ]
null
2025-08-13T06:17:49Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
prithivMLmods/RewardAnything-8B-v1-f32-GGUF
prithivMLmods
2025-08-13T06:44:36Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "text-generation", "en", "base_model:WisdomShell/RewardAnything-8B-v1", "base_model:quantized:WisdomShell/RewardAnything-8B-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-12T04:22:23Z
---
license: apache-2.0
base_model:
- WisdomShell/RewardAnything-8B-v1
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---

# **RewardAnything-8B-v1-f32-GGUF**

> RewardAnything-8B-v1 is a generalizable, principle-following reward model with 8B parameters, based on Qwen3-8B. It is designed to interpret and apply natural language principles directly at inference time, allowing dynamic adaptation across diverse evaluation criteria without retraining. The model achieves state-of-the-art results on RM-Bench and RABench, generalizes to new, unseen reward principles, and supports transparent reasoning to explain its decisions. It works efficiently with standard RLHF (PPO, GRPO) pipelines and offers flexible deployment for local use, batch inference, or direct Hugging Face integration, all under the Apache 2.0 license for both research and production-scale applications.

## Execute using Ollama

Run: `ollama run hf.co/prithivMLmods/RewardAnything-8B-v1-f32-GGUF:BF16`

## Model Files

| File Name | Quant Type | File Size |
| - | - | - |
| RewardAnything-8B-v1.BF16.gguf | BF16 | 16.4 GB |
| RewardAnything-8B-v1.F16.gguf | F16 | 16.4 GB |
| RewardAnything-8B-v1.F32.gguf | F32 | 32.8 GB |

## Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
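Beyond Ollama, the GGUF files listed above can be fetched and run locally. The sketch below assumes `huggingface_hub` for the download and `llama-cpp-python` as the local runtime (the card itself only documents the Ollama route) and uses the F16 file from the Model Files table:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files listed in the Model Files table.
gguf_path = hf_hub_download(
    repo_id="prithivMLmods/RewardAnything-8B-v1-f32-GGUF",
    filename="RewardAnything-8B-v1.F16.gguf",
)

# Load with llama-cpp-python; n_ctx and n_threads are illustrative settings.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_threads=8)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Principle: prefer concise answers. Which response is better, A or B?",
        }
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```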
AmanPriyanshu/gpt-oss-20.9b-specialized-health_or_medicine-pruned-moe-only-32-experts
AmanPriyanshu
2025-08-13T06:44:34Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:43:37Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (32 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 32 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~20.9B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 32 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 32 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 100.0% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 32 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 32 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-20.9b-specialized-health_or_medicine-pruned-moe-only-32-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-health_or_medicine-pruned-moe-only-32-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-20.9b-specialized-health_or_medicine-pruned-moe-only-32-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-health_or_medicine-pruned-moe-only-32-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs several essential functions to keep the circulatory system working efficiently. 
Its main functions can be grouped into three broad categories: | Function | What it does | Why it matters | |----------|--------------|----------------| | **Pumping blood** | The heart contracts (systole) and relaxes (diastole) in a coordinated rhythm, propelling blood through the body. | Keeps oxygen, nutrients, hormones, and waste products moving to and from tissues. | | **Maintaining blood pressure** | By generating enough force during systole, the heart sets the pressure that drives blood through arteries, capillaries, and veins. | Adequate pressure is needed for capillary exchange and to ensure blood reaches all organs, especially the brain and kidneys. | | **Regulating blood flow** | The heart’s rhythm and contractility can be adjusted by the autonomic nervous system and hormonal signals (e.g., adrenaline). | Allows the body to respond to changes in activity level, stress, or metabolic demand. | ### How the heart accomplishes these tasks 1. **Electrical conduction system** - **Sinoatrial (SA) node**: “pacemaker” that initiates the heartbeat. - **Atrioventricular (AV) node**: delays the impulse so the atria finish contracting before the ventricles. - **Bundle of His → Purkinje fibers**: conduct the impulse rapidly to the ventricles, ensuring a coordinated contraction. 2. **Cardiac cycle** - **Diastole**: atria and ventricles relax; the atrioventricular valves (mitral and tricuspid) open, allowing blood to fill the ventricles. - **Systole**: ventricles contract; the semilunar valves (aortic and pulmonary) open, ejecting blood into the aorta and pulmonary artery. 3. **Pressure generation** - **Left ventricle**: generates high pressure (~120 mm Hg systolic) to push blood into the systemic circulation. - **Right ventricle**: generates lower pressure (~25 mm Hg systolic) to send blood to the lungs. 4. **Autoregulation** - **Frank–Starling mechanism**: increased venous return stretches ventricular walls, leading to a stronger contraction. - **Neural ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the big differences** | Feature | Bacteria | Viruses | |---------|----------|---------| | **Basic nature** | Living, single‑cell organisms (prokaryotes). | Non‑living particles; need a host cell to replicate. | | **Size** | ~0.5–5 µm (visible under light microscope). | ~20–300 nm (visible only with electron microscope). | | **Structure** | Cell wall, cytoplasm, ribosomes, DNA (circular or linear), sometimes plasmids. | Protein coat (capsid) surrounding a nucleic acid core (DNA or RNA). No metabolic machinery. | | **Reproduction** | Binary fission (autonomous division). | Hijack host cell’s replication machinery to produce new virions. | | **Metabolism** | Can be autotrophic or heterotrophic; can generate energy (e.g., via respiration, photosynthesis). | No metabolism of their own; rely entirely on host cell. | | **Genetic material** | DNA (sometimes RNA). | DNA or RNA, but never both. | | **Cellular components** | Membrane, ribosomes, enzymes, transporters. | Only capsid proteins and sometimes an envelope derived from host membrane. | | **Response to antibiotics** | Many antibiotics target cell wall synthesis, protein synthesis, or DNA replication. | Antibiotics are ineffective; antiviral drugs target specific viral enzymes or entry steps. 
| | **Disease mechanisms** | Can cause disease by toxin production, invasion, or immune activation. | Cause disease by destroying host cells, evading immunity, or triggering immune responses. | | **Treatment** | Antibiotics, vaccines, phage therapy. | Antiviral drugs, vaccines, immune‑modulating therapies. | | **Evolution** | Reproduce by mutation and horizontal gene transfer. | Rapid evolution via mutation, recombination, and reassortment. | **Bottom line:** Bacteria are autonomous, living cells that can survive and reproduce on their own, whereas viruses are essentially genetic material wrapped in a protein coat that can only replicate by commandeering a host cell’s machinery. This fundamental difference shapes how we diagnose, treat, and prevent infections caused by each.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer symptoms and causes.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). The two most common types are type 1 and type 2, but there are other less common forms (gestational, monogenic, etc.). Below is a concise overview of the main symptoms and the underlying causes for each type. --- ## 1. Symptoms of Diabetes | Symptom | Typical Onset | Why It Happens | |---------|---------------|----------------| | **Polyuria** (frequent urination) | Early | High blood glucose spills into urine, pulling water with it (osmotic diuresis). | | **Polydipsia** (excessive thirst) | Early | Loss of fluid through urine leads to dehydration, stimulating thirst. | | **Polyphagia** (increased hunger) | Early | Cells cannot use glucose, so the body signals for more food. | | **Weight loss** (especially in type 1) | Early | Body breaks down fat and muscle for energy when glucose can’t enter cells. | | **Fatigue / Weakness** | Early | Energy production is impaired. | | **Blurred vision** | Early | Hyperglycemia causes fluid shifts in the lens, altering its shape. | | **Slow‑healing cuts or infections** | Early | High glucose impairs immune function and wound healing. | | **Numbness or tingling in extremities** | Later | Chronic hyperglycemia damages nerves (diabetic neuropathy). | | **Red, swollen, or painful gums** | Later | Diabetes worsens gum disease. | | **Recurrent urinary tract or genital infections** | Later | High glucose in urine or vaginal secretions promotes bacterial growth. | | **Night sweats, dizziness, or fainting** | Variable | Fluctuations in blood sugar or dehydration. | > **Note:** In type 1 diabetes, symptoms often appear abruptly over days to weeks. In type 2, symptoms can develop gradually over months or years and may be subtle or absent until complications arise. --- ## 2. Causes of Diabetes ### A. Type 1 Diabetes (T1D) | Cause | Mechanism | |-------|-----------| | **Autoimmune destruction of β‑cells** | The immune system mistakenly attacks insulin‑producing β‑cells in the pancreas. 
| | **Genetic predisposition** | Certain HLA ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
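## Sanity check: expert parameter fractions

The "Smaller Memory Footprint" percentages quoted across these pruned GPT-OSS cards are simply the ratio of retained to original experts per layer. The snippet below is a quick sanity check of that arithmetic using only the expert counts stated in the cards; it loads no model weights.

```python
# Quick check of the expert-parameter fractions quoted in the pruned GPT-OSS cards.
def retained_fraction(original_experts: int, retained_experts: int) -> float:
    """Fraction of expert parameters kept after pruning a single MoE layer."""
    return retained_experts / original_experts

for kept in (32, 30, 29):
    print(f"{kept}/32 experts -> {retained_fraction(32, kept):.1%} of expert parameters")
# 32/32 -> 100.0%, 30/32 -> 93.8%, 29/32 -> 90.6%, matching the figures in the cards.
```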
NanEi/sealion_merge_bot_v3
NanEi
2025-08-13T06:44:00Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T06:40:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lur601/xlm_roberta-base-finetuned-paxn-en
lur601
2025-08-13T06:43:26Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T06:42:06Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm_roberta-base-finetuned-paxn-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm_roberta-base-finetuned-paxn-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4006 - F1: 0.6696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0761 | 1.0 | 50 | 0.5513 | 0.5613 | | 0.5184 | 2.0 | 100 | 0.4225 | 0.6514 | | 0.3981 | 3.0 | 150 | 0.4006 | 0.6696 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
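## Equivalent `TrainingArguments` (sketch)

The hyperparameter list above maps directly onto Hugging Face `TrainingArguments`. The sketch below reconstructs that configuration; the output directory and the per-epoch evaluation setting are assumptions (the card reports per-epoch validation metrics but does not state the evaluation strategy), and the AdamW settings listed in the card are the `adamw_torch` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed in this card.
training_args = TrainingArguments(
    output_dir="xlm_roberta-base-finetuned-paxn-en",  # placeholder, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="epoch",  # assumption, inferred from the per-epoch validation table
)
```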
lur601/xlm_roberta-base-finetuned-paxn-it
lur601
2025-08-13T06:42:03Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T06:40:31Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm_roberta-base-finetuned-paxn-it results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm_roberta-base-finetuned-paxn-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2648 - F1: 0.8181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7205 | 1.0 | 70 | 0.3203 | 0.7290 | | 0.2731 | 2.0 | 140 | 0.2584 | 0.8194 | | 0.1863 | 3.0 | 210 | 0.2648 | 0.8181 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
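## Example usage (sketch)

Since the intended-use section above is left blank, here is a minimal inference sketch for a token-classification checkpoint of this kind. The entity label inventory depends on the undocumented training data, so treat the output schema as an assumption.

```python
from transformers import pipeline

# Minimal token-classification inference with the fine-tuned XLM-R checkpoint.
ner = pipeline(
    "token-classification",
    model="lur601/xlm_roberta-base-finetuned-paxn-it",
    aggregation_strategy="simple",  # merge word-piece predictions into word-level spans
)

print(ner("Giovanni vive a Roma e lavora per la FAO."))
```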
nuguri01/my_summary_model
nuguri01
2025-08-13T06:42:01Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:41:34Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer model-index: - name: my_summary_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_summary_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 18 - eval_batch_size: 18 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 36 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
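## Example usage (sketch)

The card does not show how to run the model; below is a minimal sketch for a T5-style summarization checkpoint. The `summarize:` prefix is the standard T5 convention and is an assumption here, since the preprocessing used during fine-tuning is not documented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "nuguri01/my_summary_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5-style task prefix (assumed); replace the placeholder with real article text.
text = "summarize: " + "Long article text goes here ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```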
lobbylob/blockassist-bc-placid_soft_ant_1755065175
lobbylob
2025-08-13T06:41:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid soft ant", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:41:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid soft ant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
giovannidemuri/llama8b-er-afg-v16-seed2-mcdonald
giovannidemuri
2025-08-13T06:40:40Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T04:16:59Z
--- library_name: transformers license: llama3.1 base_model: meta-llama/Llama-3.1-8B tags: - generated_from_trainer model-index: - name: llama8b-er-afg-v16-seed2-mcdonald results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama8b-er-afg-v16-seed2-mcdonald This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 4.0.0 - Tokenizers 0.21.0
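## Example usage (sketch)

As with the other auto-generated cards above, usage is left blank; the sketch below shows one way to query the model. It assumes the repository ships the Llama 3.1 chat template with its tokenizer, which the card does not confirm.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "giovannidemuri/llama8b-er-afg-v16-seed2-mcdonald"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Briefly explain what fine-tuning changes in a base model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```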
lur601/xlm-roberta-base-finetuned-panx-it
lur601
2025-08-13T06:39:53Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T06:34:28Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2465 - F1: 0.8298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7606 | 1.0 | 70 | 0.3201 | 0.7487 | | 0.2895 | 2.0 | 140 | 0.2722 | 0.7857 | | 0.1834 | 3.0 | 210 | 0.2465 | 0.8298 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755067039
IvanJAjebu
2025-08-13T06:38:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:38:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-19.7b-specialized-health_or_medicine-pruned-moe-only-30-experts
AmanPriyanshu
2025-08-13T06:38:29Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:37:34Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (30 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 30 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~19.7B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 30 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 30 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 93.8% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 30 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 30 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-19.7b-specialized-health_or_medicine-pruned-moe-only-30-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.7b-specialized-health_or_medicine-pruned-moe-only-30-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-19.7b-specialized-health_or_medicine-pruned-moe-only-30-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.7b-specialized-health_or_medicine-pruned-moe-only-30-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs several essential functions to keep the circulatory system working efficiently. 
Its main functions can be grouped into three interrelated categories: | Function | What it does | Why it matters | |----------|--------------|----------------| | **Pumping (circulation)** | • Generates rhythmic contractions (systole) that force blood out of the heart.<br>• Relaxation (diastole) allows the chambers to fill with blood. | Keeps blood moving through the body, delivering oxygen, nutrients, and hormones while removing waste products. | | **Pressure regulation** | • Maintains arterial pressure (systolic) and venous pressure (diastolic) through coordinated contraction and relaxation.<br>• Works with the autonomic nervous system and hormonal signals (e.g., adrenaline, angiotensin II). | Ensures that blood reaches tissues at the right pressure; prevents damage to capillaries and organs. | | **Blood distribution (circulatory routing)** | • Divides blood into two main circuits:<br> – **Pulmonary circuit**: de‑oxygenated blood goes to the lungs for gas exchange.<br> – **Systemic circuit**: oxygenated blood is delivered to the rest of the body.<br>• Uses valves (tricuspid, pulmonary, mitral, aortic) to keep flow unidirectional. | Guarantees that oxygen‑rich blood reaches every cell and that carbon dioxide and metabolic waste are carried back to the lungs and kidneys for elimination. | ### How the heart accomplishes these functions 1. **Electrical conduction system** - **Sinoatrial (SA) node**: “pacemaker” that initiates the heartbeat. - **Atrioventricular (AV) node, Bundle of His, Purkinje fibers**: conduct the impulse to the ventricles, coordinating contraction. 2. **Cardiac muscle (myocardium)** - Contractile fibers generate force; their contraction is powered by ATP from oxygenated blood. 3. **Valves and chambers** - **Right atrium → right ventricle → pulmonary artery → lungs → left atrium → left ventricle → aorta → body**. - Valves prevent back‑flow, ensuring efficient forward movement. 4. **Autonomic regulation** - Sympathetic stimulation increases heart rate and contractility (fight‑or ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the big differences** | Feature | Bacteria | Viruses | |---------|----------|---------| | **Basic nature** | Living, single‑cell organisms (prokaryotes). | Non‑living particles; need a host cell to replicate. | | **Size** | ~0.5–5 µm (visible under light microscope). | ~20–300 nm (visible only with electron microscope). | | **Structure** | Cell wall, cell membrane, cytoplasm, ribosomes, DNA (circular or linear). | Protein coat (capsid) surrounding a nucleic acid core (DNA or RNA). No metabolic machinery. | | **Reproduction** | Binary fission (self‑replication). | Hijack host cell’s replication machinery to produce new virions. | | **Metabolism** | Can grow on nutrients, produce energy, respire, etc. | No metabolism; rely entirely on host cell. | | **Genetic material** | DNA (sometimes plasmids). | DNA or RNA, single or double stranded. | | **Response to antibiotics** | Many antibiotics target bacterial cell wall, protein synthesis, etc. | Antibiotics are ineffective; antiviral drugs target specific viral enzymes or entry steps. | | **Disease mechanisms** | Direct damage, toxin production, immune activation. | Direct cell lysis, immune evasion, integration into host genome (retroviruses). 
| | **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Mycobacterium tuberculosis*. | Influenza virus, HIV, SARS‑CoV‑2, HPV. | **Bottom line:** Bacteria are autonomous, living cells that can grow and reproduce on their own, whereas viruses are inert genetic packages that must invade a host cell to replicate. This fundamental difference shapes how we treat infections: antibiotics for bacteria, antivirals or vaccines for viruses.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). The two most common types are type 1 and type 2, but there are other less common forms (gestational, MODY, etc.). Below is a concise overview of the main symptoms, causes, and some key differences between the types. --- ## 1. Symptoms (Signs) | Symptom | Typical Onset | Why It Happens | |---------|---------------|----------------| | **Polyuria** (frequent urination) | Early | Kidneys excrete excess glucose, pulling water with it. | | **Polydipsia** (excessive thirst) | Early | Dehydration from fluid loss. | | **Polyphagia** (increased hunger) | Early | Cells can’t use glucose, so the body signals for more food. | | **Weight loss** (especially in type 1) | Early | Body breaks down fat and muscle for energy. | | **Fatigue / Weakness** | Early | Energy production is impaired. | | **Blurred vision** | Early | Hyperglycemia causes fluid shifts in the lens. | | **Slow‑healing cuts or infections** | Early‑mid | Poor blood flow and immune function. | | **Numbness or tingling in extremities** | Mid‑to‑late | Peripheral neuropathy from chronic high glucose. | | **Skin changes (dry, itchy, fungal infections)** | Mid‑to‑late | Hyperglycemia impairs skin barrier and immune response. | | **Recurrent urinary tract or genital infections** | Mid‑to‑late | Glucose in urine provides a food source for bacteria/fungi. | | **Eye problems (retinopathy, cataracts, glaucoma)** | Late | Chronic damage to retinal vessels and lens. | | **Kidney problems (proteinuria, decreased GFR)** | Late | Glomerular damage from prolonged hyperglycemia. | | **Cardiovascular disease (angina, heart attack, stroke)** | Late | Accelerated atherosclerosis. | > **Tip:** In type 1 diabetes, symptoms often appear rapidly (days to weeks). In type 2, they can develop insidiously over months or years, sometimes going unnoticed until complications arise. --- ## 2. 
Causes & Risk Factors | Category | Key Points | |----------|------------| | **Type ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
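## Expert ranking illustration (sketch)

The "Expert Ranking" step described above amounts to counting how often the router selects each expert on in-domain prompts and keeping the top N per layer. The toy sketch below illustrates that selection logic; the activation-log structure is hypothetical and is not the schema of the released dataset.

```python
from collections import Counter

# Toy illustration of top-N expert selection per layer from router activation logs.
# activation_log is a hypothetical structure: one (layer, expert) pair per routed token.
activation_log = [(0, 5), (0, 5), (0, 17), (1, 2), (1, 2), (1, 2), (1, 30), (0, 17), (0, 5)]

experts_to_keep = 30  # this variant keeps the 30 most-used experts in each layer

per_layer_counts = {}
for layer, expert in activation_log:
    per_layer_counts.setdefault(layer, Counter())[expert] += 1

kept = {
    layer: [expert for expert, _ in counts.most_common(experts_to_keep)]
    for layer, counts in per_layer_counts.items()
}
print(kept)  # {0: [5, 17], 1: [2, 30]} -- only experts that appear in this toy log
```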
AmanPriyanshu/gpt-oss-19.1b-specialized-health_or_medicine-pruned-moe-only-29-experts
AmanPriyanshu
2025-08-13T06:35:27Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:34:36Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (29 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 29 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~19.1B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 29 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 29 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 90.6% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 29 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 29 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-19.1b-specialized-health_or_medicine-pruned-moe-only-29-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.1b-specialized-health_or_medicine-pruned-moe-only-29-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-19.1b-specialized-health_or_medicine-pruned-moe-only-29-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-19.1b-specialized-health_or_medicine-pruned-moe-only-29-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs several essential functions to keep the body’s circulatory system running smoothly. 
Its main functions can be grouped into three broad categories: | Function | What it does | Why it matters | |----------|--------------|----------------| | **Pumping blood** | The heart contracts (systole) and relaxes (diastole) to move blood through the body. | Keeps oxygen, nutrients, hormones, and waste products circulating. | | **Maintaining blood pressure** | By generating enough force during systole, the heart sets the pressure that drives blood through arteries and capillaries. | Ensures tissues receive adequate perfusion and that blood returns to the heart. | | **Regulating blood flow** | The heart’s rhythm and contractility can be adjusted by the autonomic nervous system and hormonal signals. | Allows the body to respond to changes in activity, stress, or metabolic demand. | ### 1. Pumping Blood - **Four chambers**: Two atria (upper chambers) receive blood; two ventricles (lower chambers) pump it out. - **Right side**: Receives de‑oxygenated blood from the body and pumps it to the lungs (pulmonary circulation). - **Left side**: Receives oxygenated blood from the lungs and pumps it to the rest of the body (systemic circulation). ### 2. Maintaining Blood Pressure - **Systolic pressure**: The peak pressure during ventricular contraction; drives blood into arteries. - **Diastolic pressure**: The pressure when the heart relaxes; allows coronary arteries to fill and maintains perfusion of tissues. - **Pulse pressure**: The difference between systolic and diastolic pressures; reflects arterial compliance and cardiac output. ### 3. Regulating Blood Flow - **Heart rate**: Controlled by the sinoatrial (SA) node, autonomic nervous system, and hormones (e.g., adrenaline). - **Stroke volume**: The amount of blood ejected per beat; influenced by preload (venous return), afterload (vascular resistance), and contractility. - **Cardiac output**: Heart rate × stroke volume; the total volume of blood pumped per minute. --- #### How These Functions Work Together 1. **Blood enters the right atrium** → passes to the right ventricle → pumped to the lungs. 2. **Oxygenated blood returns to the left atr ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences** | Feature | Bacteria | Viruses | |---------|----------|---------| | **Cellular status** | True cells – prokaryotic, with a cell membrane, cytoplasm, ribosomes, and a cell wall (often peptidoglycan). | Not cells – just a protein coat (capsid) surrounding genetic material (DNA or RNA). No cellular machinery. | | **Size** | 0.5–5 µm (visible under light microscope). | 20–300 nm (visible only with electron microscopy). | | **Reproduction** | Autonomous binary fission; can grow and divide on their own. | Must hijack a host cell’s machinery to replicate; no independent growth. | | **Genetic material** | Usually circular or linear DNA; some have plasmids. | Either DNA or RNA, single‑ or double‑stranded, linear or circular. | | **Metabolism** | Can be autotrophic (photosynthetic or chemosynthetic) or heterotrophic; many have diverse metabolic pathways. | No metabolism; rely entirely on host cell processes. | | **Structure** | Cell wall, cytoplasmic membrane, ribosomes, sometimes flagella or pili. | Capsid (protein shell), sometimes an envelope derived from host membrane, and a nucleic acid core. 
| | **Life cycle** | Can form colonies, biofilms, and spores (in some species). | Infects a single cell, replicates, assembles new virions, then exits (lysis or budding). | | **Treatment** | Antibiotics target bacterial cell wall, protein synthesis, DNA replication, etc. | Antivirals target viral enzymes or entry/exit steps; antibiotics are ineffective. | | **Role in disease** | Can cause infections (e.g., pneumonia, urinary tract infection) but many are harmless or beneficial. | Cause diseases by destroying host cells (e.g., influenza, HIV, COVID‑19). | | **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Mycobacterium tuberculosis*. | Influenza virus, HIV, SARS‑CoV‑2, HPV. | **Bottom line:** Bacteria are living, self‑sufficient cells that can grow and divide on their own. Viruses are sub‑cellular particles that need a host cell to replicate; they are ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). The two most common types are type 1 and type 2, but there are other forms (gestational, monogenic, etc.). Below is a concise overview of the main symptoms, causes, and some key differences between the types. --- ## 1. Symptoms | Symptom | Typical Onset | Why It Happens | |---------|---------------|----------------| | **Polyuria** (frequent urination) | Often early | Kidneys excrete excess glucose, pulling water with it. | | **Polydipsia** (excessive thirst) | Early | Dehydration from fluid loss. | | **Polyphagia** (increased hunger) | Early | Cells can’t use glucose, so the body signals for more food. | | **Weight loss** (especially in type 1) | Early | Body breaks down fat and muscle for energy. | | **Fatigue / weakness** | Early | Energy production is impaired. | | **Blurred vision** | Early | Hyperglycemia causes fluid shifts in the lens. | | **Slow‑healing cuts or infections** | Early‑mid | Poor blood flow and immune function. | | **Numbness or tingling in extremities** | Mid‑to‑late | Peripheral neuropathy from chronic high glucose. | | **Red, swollen, or painful gums** | Mid‑to‑late | Poor wound healing and gum disease. | | **Recurrent urinary tract or vaginal infections** | Mid‑to‑late | High glucose in urine/glands promotes bacterial growth. | | **Skin changes (e.g., dark patches, itching)** | Late | Advanced disease or associated conditions. | > **Tip:** In type 2 diabetes, many people are asymptomatic for years; routine screening (fasting glucose, HbA1c) is essential if risk factors are present. --- ## 2. 
Causes & Risk Factors | Type | Primary Cause | Key Risk Factors | |------|---------------|------------------| | **Type 1 Diabetes** | Autoimmune destruction of pancreatic β‑cells → insulin deficiency | • Genetic predisposition (HLA genes) <br>• Viral infections (e.g., enteroviruses) <br>• Early childhood onset <br>• Low vitamin D (possible link) | | **Type 2 Diabetes ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
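For readers who want the CUDA path spelled out, here is a minimal end-to-end sketch that simply applies the two GPU-inference changes listed above to the CPU example (same model id; the prompt and generation parameters mirror the earlier snippets and are illustrative rather than tuned):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AmanPriyanshu/gpt-oss-19.1b-specialized-health_or_medicine-pruned-moe-only-29-experts"

# Same loading pattern as the CPU example, but with automatic device placement
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16
    device_map="auto",           # will use the GPU if one is available
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "What are the main functions of the human heart?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium",
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```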
pfpguy/blockassist-bc-scruffy_sleek_locust_1755066639
pfpguy
2025-08-13T06:34:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy sleek locust", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:31:27Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy sleek locust
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lur601/xlm-roberta-base-finetuned-panx-fr
lur601
2025-08-13T06:34:20Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T06:26:56Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2779
- F1: 0.8411

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5637        | 1.0   | 191  | 0.3215          | 0.7837 |
| 0.2667        | 2.0   | 382  | 0.2779          | 0.8297 |
| 0.182         | 3.0   | 573  | 0.2779          | 0.8411 |

### Framework versions

- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
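Since the card stops short of a usage example, the following is a small, illustrative sketch of querying this French NER checkpoint through the `transformers` pipeline; the aggregation strategy and the example sentence are illustrative assumptions rather than anything documented on the card:

```python
from transformers import pipeline

# Illustrative only: PAN-X style named-entity recognition with the fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="lur601/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)

print(ner("Emmanuel Macron a visité l'usine Airbus à Toulouse."))
# Each prediction is a dict with entity_group, score, word, start and end offsets.
```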
AmanPriyanshu/gpt-oss-17.9b-specialized-health_or_medicine-pruned-moe-only-27-experts
AmanPriyanshu
2025-08-13T06:29:35Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:28:43Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (27 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 27 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~17.9B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 27 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 27 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 84.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 27 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 27 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-17.9b-specialized-health_or_medicine-pruned-moe-only-27-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.9b-specialized-health_or_medicine-pruned-moe-only-27-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-17.9b-specialized-health_or_medicine-pruned-moe-only-27-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.9b-specialized-health_or_medicine-pruned-moe-only-27-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs three core functions: | Function | What it does | Why it matters | |----------|--------------|----------------| | **Pumping blood** | Contracts and relaxes rhythmically to move blood through the circulatory system. | Delivers oxygen, nutrients, hormones, and removes waste products to every cell. | | **Maintaining blood pressure** | Generates enough force to keep blood moving against resistance in arteries and veins. | Keeps tissues perfused and allows the body to respond to changes in activity or posture. | | **Regulating blood flow** | Adjusts rate and force of contraction (cardiac output) and controls valve opening/closing. | Matches oxygen delivery to metabolic demand (e.g., during exercise vs rest). | Together, these functions keep the body’s cells supplied with the substances they need and remove metabolic waste, sustaining life.<|return|> ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences** | Feature | Bacteria | Viruses | |--------|----------|---------| | **Cellular status** | True cells – prokaryotic, with a cell membrane, cytoplasm, ribosomes, and a cell wall (often peptidoglycan). | Non‑cellular particles – no membrane, no cytoplasm, no ribosomes. | | **Size** | 0.5–5 µm (visible under light microscope). | 20–300 nm (visible only with electron microscopy). | | **Genetic material** | DNA or RNA, usually a single circular chromosome; some have plasmids. | DNA or RNA, single or double stranded, but never both. | | **Reproduction** | Binary fission (autonomous, self‑replicating). | Must hijack a host cell’s machinery to replicate. | | **Metabolism** | Can be autotrophic or heterotrophic; many can generate energy (e.g., via respiration, photosynthesis). | No metabolism of their own; rely entirely on host cell enzymes. | | **Structure** | Cell wall, cytoplasmic membrane, ribosomes, sometimes flagella or pili. | Protein coat (capsid) surrounding the genome; some have an outer lipid envelope. | | **Life cycle** | Can grow, divide, and form colonies. | Infect a host cell, replicate, then exit (lysis or budding). | | **Treatment** | Antibiotics (target cell wall, protein synthesis, DNA replication, etc.). | Antivirals (target viral enzymes or entry/exit steps); antibiotics ineffective. | | **Disease examples** | *Streptococcus pneumoniae* (pneumonia), *Escherichia coli* (UTI). | Influenza virus, HIV, SARS‑CoV‑2. | | **Transmission** | Often via contact, food, water, vectors. | Often via aerosols, bodily fluids, or contaminated surfaces. | **Bottom line:** Bacteria are living, self‑sufficient cells that can grow and reproduce on their own. Viruses are obligate parasites that need a host cell to replicate; they are essentially genetic material wrapped in a protein coat and have no metabolic machinery of their own.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). 
The two most common types—Type 1 and Type 2—have distinct causes, but many symptoms overlap because they all stem from elevated glucose levels and impaired insulin action or production. --- ## 1. Symptoms | Symptom | Why it Happens | Typical Onset | |---------|----------------|---------------| | **Polyuria** (frequent urination) | Kidneys excrete excess glucose, pulling water with it. | Often early, especially in Type 1. | | **Polydipsia** (excessive thirst) | Dehydration from fluid loss via urine. | Early. | | **Polyphagia** (increased hunger) | Cells can’t use glucose, so the body signals for more food. | Early. | | **Weight loss** (especially in Type 1) | Body breaks down fat and muscle for energy. | Early in Type 1; later in Type 2. | | **Fatigue / Weakness** | Energy production is impaired. | Early. | | **Blurred vision** | Fluid shifts in the lens of the eye. | Early. | | **Slow‑healing cuts or infections** | High glucose impairs immune function and wound healing. | Variable. | | **Numbness or tingling in hands/feet** (neuropathy) | Chronic high glucose damages nerves. | Usually later. | | **Darkened skin patches** (acanthosis nigricans) | Often seen in insulin resistance. | Later, especially in Type 2. | | **Recurrent infections** (UTIs, yeast, skin) | Immune suppression. | Variable. | | **Unexplained weight loss** (Type 2) | Less common but can occur in advanced disease. | Late. | > **Note:** Some people, especially those with Type 2 diabetes, may have *no symptoms* for years. Routine screening is important if you have risk factors. --- ## 2. Causes ### A. Type 1 Diabetes (T1D) | Factor | Explanation | |--------|-------------| | **Autoimmune destruction of β‑cells** | The immune system mistakenly attacks insulin‑producing cells in the pancreas. | | **Genetic predisposition** | Certain HLA genes increase risk. | | **Environmental triggers** | ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
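As a complement to the pruning description above, here is a purely conceptual sketch of the expert-ranking step (counting how often each expert is routed to on domain prompts and keeping the most-used ones); it is not the authors' actual pipeline, and the tensor names and shapes are assumptions made for illustration:

```python
import torch

def rank_experts(router_topk_indices: torch.Tensor, num_experts: int = 32, keep: int = 27):
    """Rank the experts of one MoE layer by how often the router selected them.

    router_topk_indices: assumed shape [num_tokens, top_k], holding the expert ids
    (0..num_experts-1) chosen for each token while running health/medicine prompts.
    """
    # Count how many times each expert was activated for this domain
    counts = torch.bincount(router_topk_indices.reshape(-1), minlength=num_experts)
    # Retain the `keep` most frequently activated experts (the rest would be pruned)
    kept = torch.topk(counts, keep).indices.sort().values
    return counts, kept

# Toy example: 8 tokens with top-4 routing over 32 experts
toy_routing = torch.randint(0, 32, (8, 4))
counts, kept_experts = rank_experts(toy_routing)
print("activations per expert:", counts.tolist())
print("experts kept for this layer:", kept_experts.tolist())
```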
pdjack/roberta-base-emotion
pdjack
2025-08-13T06:29:27Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T06:28:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lur601/xlm-roberta-base-finetuned-panx-de-fr
lur601
2025-08-13T06:28:38Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T05:21:13Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1600 - F1: 0.8600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2801 | 1.0 | 715 | 0.1890 | 0.8203 | | 0.1462 | 2.0 | 1430 | 0.1596 | 0.8461 | | 0.0938 | 3.0 | 2145 | 0.1600 | 0.8600 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
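The hyperparameters listed above map almost one-to-one onto `TrainingArguments`. As a rough, hedged sketch of how such a run could be configured (the dataset loading, tokenization, and metric code are not documented on this card and are therefore omitted):

```python
from transformers import TrainingArguments

# Sketch only: the card's reported hyperparameters expressed as TrainingArguments
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de-fr",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
# These arguments would then be passed to a Trainer together with an
# AutoModelForTokenClassification head on xlm-roberta-base and the (unspecified) tokenized NER data.
```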
Jongha611/my_summary_model
Jongha611
2025-08-13T06:26:10Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:25:29Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_summary_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_summary_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4567 - Rouge1: 0.1578 - Rouge2: 0.0639 - Rougel: 0.1301 - Rougelsum: 0.13 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.6741 | 1.0 | 248 | 2.5163 | 0.1462 | 0.0512 | 0.1181 | 0.1177 | 20.0 | | 2.6644 | 2.0 | 496 | 2.4567 | 0.1578 | 0.0639 | 0.1301 | 0.13 | 20.0 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
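No usage snippet is provided, so here is a minimal, illustrative example of calling this summarization fine-tune through the pipeline API; the input text is invented for the example, and the short `max_length` simply reflects the Gen Len of 20 reported above:

```python
from transformers import pipeline

# Illustrative only: the "summarize:" prefix follows the usual T5 convention
summarizer = pipeline("summarization", model="Jongha611/my_summary_model")

article = (
    "The Eiffel Tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)
print(summarizer("summarize: " + article, max_length=20, min_length=5)[0]["summary_text"])
```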
dbdhkddlehlfskawk/my_summary_model
dbdhkddlehlfskawk
2025-08-13T06:26:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:24:19Z
--- library_name: transformers license: apache-2.0 base_model: google-T5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_summary_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_summary_model This model is a fine-tuned version of [google-T5/t5-small](https://huggingface.co/google-T5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4994 - Rouge1: 0.1621 - Rouge2: 0.0637 - Rougel: 0.1306 - Rougelsum: 0.1305 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.9147 | 1.0 | 124 | 2.6475 | 0.1398 | 0.0512 | 0.1138 | 0.1138 | 20.0 | | 2.6521 | 2.0 | 248 | 2.5263 | 0.1547 | 0.0583 | 0.1242 | 0.1239 | 20.0 | | 2.6913 | 3.0 | 372 | 2.4994 | 0.1621 | 0.0637 | 0.1306 | 0.1305 | 20.0 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
harkase/my_summary_model
harkase
2025-08-13T06:25:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:25:43Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_summary_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_summary_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4281 - Rouge1: 0.1698 - Rouge2: 0.0714 - Rougel: 0.1413 - Rougelsum: 0.1413 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.62 | 1.0 | 62 | 2.5364 | 0.1426 | 0.0515 | 0.1181 | 0.1184 | 20.0 | | 2.7018 | 2.0 | 124 | 2.4689 | 0.155 | 0.0588 | 0.1282 | 0.1283 | 20.0 | | 2.7183 | 3.0 | 186 | 2.4368 | 0.1661 | 0.0681 | 0.1385 | 0.1384 | 20.0 | | 2.7504 | 4.0 | 248 | 2.4281 | 0.1698 | 0.0714 | 0.1413 | 0.1413 | 20.0 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
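For completeness, a direct `generate()` call (as opposed to the pipeline) might look like the sketch below; the article text is invented, and `max_new_tokens=20` mirrors the Gen Len reported in the evaluation table:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative only: load the fine-tuned T5 summarizer and generate a short summary
tokenizer = AutoTokenizer.from_pretrained("harkase/my_summary_model")
model = AutoModelForSeq2SeqLM.from_pretrained("harkase/my_summary_model")

text = "summarize: " + (
    "Researchers announced a new battery chemistry that they say charges faster and "
    "retains more capacity after repeated cycles than today's lithium-ion cells."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```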
AmanPriyanshu/gpt-oss-16.7b-specialized-health_or_medicine-pruned-moe-only-25-experts
AmanPriyanshu
2025-08-13T06:23:47Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:23:02Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (25 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 25 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~16.7B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 25 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 25 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 78.1% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 25 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 25 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-16.7b-specialized-health_or_medicine-pruned-moe-only-25-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-health_or_medicine-pruned-moe-only-25-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-16.7b-specialized-health_or_medicine-pruned-moe-only-25-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-health_or_medicine-pruned-moe-only-25-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs three core functions: | Function | What it does | How it works | |----------|--------------|--------------| | **Pumping blood** | Moves oxygen‑rich blood to tissues and returns oxygen‑poor blood to the lungs | The left ventricle contracts to send blood into the aorta; the right ventricle pumps to the pulmonary artery. | | **Maintaining circulation** | Keeps blood flowing through the body’s network of arteries, capillaries, and veins | The heart’s rhythmic contractions create pressure gradients that drive blood through the circulatory system. | | **Regulating blood pressure** | Controls the force and volume of blood flow | The heart’s contractility, heart rate, and the compliance of the vascular system together set arterial pressure. | Together, these functions ensure that oxygen, nutrients, hormones, and waste products are delivered and removed efficiently throughout the body.<|return|> ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences** | Feature | Bacteria | Viruses | |--------|----------|---------| | **Cellular structure** | True cells – have a cell membrane, cytoplasm, ribosomes, and a cell wall (in most). | Not cells – just a protein coat (capsid) surrounding genetic material; no membrane or organelles. | | **Size** | 0.5–5 µm (micrometers). | 20–300 nm (nanometers). Bacteria are ~10–100× larger. | | **Genetic material** | DNA (most) or RNA; usually a single circular chromosome, sometimes plasmids. | DNA or RNA, but never both. | | **Reproduction** | Binary fission (a single cell splits into two). Requires nutrients and a suitable environment. | Must hijack a host cell’s machinery to replicate. No independent metabolism. | | **Metabolism** | Can be autotrophic (photosynthetic, chemosynthetic) or heterotrophic. | No metabolism of their own. | | **Life cycle** | Can grow, divide, and form colonies. | Infect a host cell, replicate, then exit (lysis or budding). | | **Treatment** | Antibiotics target bacterial cell wall, protein synthesis, DNA replication, etc. | Antivirals target viral replication steps; vaccines are the main prevention. | | **Disease mechanisms** | Direct damage (toxins, enzymes), immune response to bacterial components. | Direct damage from viral replication, immune response to viral proteins. | | **Transmission** | Often via contact, food, water, vectors. | Often via droplets, direct contact, vectors, or bodily fluids. | | **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Mycobacterium tuberculosis*. | Influenza virus, HIV, SARS‑CoV‑2, HPV. | **Bottom line:** Bacteria are living, self‑sufficient cells that can grow and divide on their own. Viruses are non‑living particles that can only replicate inside a host cell, relying entirely on the host’s machinery. This fundamental difference shapes how they cause disease, how we diagnose them, and how we treat or prevent infections.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? 
**Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). The two most common types are type 1 and type 2, but there are other forms (gestational, monogenic, etc.). Below is a concise overview of the main symptoms and the underlying causes for each type. --- ## 1. Symptoms of Diabetes | Symptom | Typical Onset | Why It Happens | |---------|---------------|----------------| | **Polyuria** (frequent urination) | Early | Kidneys excrete excess glucose, pulling water out of the body. | | **Polydipsia** (increased thirst) | Early | Dehydration from fluid loss triggers thirst. | | **Polyphagia** (increased hunger) | Early | Cells can’t use glucose, so the body signals for more food. | | **Weight loss** (especially in type 1) | Early | Body breaks down fat and muscle for energy. | | **Fatigue / weakness** | Early | Energy isn’t being efficiently used. | | **Blurred vision** | Early‑mid | High glucose causes fluid shifts in the lens. | | **Slow‑healing cuts or infections** | Mid‑to‑late | Poor blood flow and immune function. | | **Numbness or tingling in hands/feet** | Late | Chronic high glucose damages nerves (diabetic neuropathy). | | **Red, swollen, or painful gums** | Late | Diabetes worsens gum disease. | | **Recurrent urinary tract infections** | Late | High glucose in urine promotes bacterial growth. | | **Skin changes (dry, itchy, fungal infections)** | Late | Poor circulation and immune changes. | > **Note:** Many people with type 2 diabetes are asymptomatic for years; routine screening is essential, especially if risk factors are present. --- ## 2. Causes (Pathophysiology) | Type | Primary Cause | Key Mechanisms | |------|---------------|----------------| | **Type 1 Diabetes** | Autoimmune destruction of pancreatic β‑cells | • Autoantibodies (GAD, IA‑2, ZnT8) target β‑cells.<br>• T‑cell mediated cytotoxicity.<br>• Result: Insulin deficiency → hyperglycemia. | | **Type 2 Diabetes** | Insulin resistance + β‑cell dysfunction | • Obesity ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
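One of the use cases listed above is speculative decoding with the full GPT-OSS-20B as the target model. In `transformers` this is exposed as assisted generation via the `assistant_model` argument; the sketch below shows the general shape of such a setup, but it is untested with this checkpoint, assumes both models fit in memory simultaneously, and relies on the two models sharing the same tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Untested sketch: the pruned model drafts tokens, the full model verifies them
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-16.7b-specialized-health_or_medicine-pruned-moe-only-25-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("What are the main functions of the human heart?", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whether assisted generation actually yields a speed-up here depends on how much cheaper the 25-expert draft is in practice; the card makes no performance claims about this configuration.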
Leonlee233/qwen2.5-3b-instruct-trl-sft-ChartQA
Leonlee233
2025-08-13T06:21:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-13T04:09:59Z
--- base_model: Qwen/Qwen2.5-VL-3B-Instruct library_name: transformers model_name: qwen2.5-3b-instruct-trl-sft-ChartQA tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for qwen2.5-3b-instruct-trl-sft-ChartQA This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Leonlee233/qwen2.5-3b-instruct-trl-sft-ChartQA", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.5.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AmanPriyanshu/gpt-oss-16.1b-specialized-health_or_medicine-pruned-moe-only-24-experts
AmanPriyanshu
2025-08-13T06:21:17Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:20:25Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (24 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 24 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~16.1B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 24 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 24 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 75.0% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 24 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 24 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-16.1b-specialized-health_or_medicine-pruned-moe-only-24-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.1b-specialized-health_or_medicine-pruned-moe-only-24-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-16.1b-specialized-health_or_medicine-pruned-moe-only-24-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.1b-specialized-health_or_medicine-pruned-moe-only-24-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs three core functions: | Function | What it does | How it works | |----------|--------------|--------------| | **Pumping blood** | Moves oxygen‑rich blood from the body to the lungs and back, and delivers oxygenated blood to tissues. | The heart’s chambers contract in a coordinated rhythm (systole) and relax (diastole), creating pressure gradients that drive blood flow. | | **Maintaining circulation** | Keeps blood pressure and flow steady so organs receive the nutrients and oxygen they need. | The heart’s electrical conduction system (SA node, AV node, bundle branches, Purkinje fibers) ensures timely, synchronized contractions that sustain a steady cardiac output. | | **Regulating blood volume and pressure** | Adjusts the amount of blood pumped per beat (stroke volume) and overall cardiac output to match the body’s metabolic demands. | Reflexes (baroreceptor, chemoreceptor) and hormonal signals (e.g., adrenaline, angiotensin) modify heart rate, contractility, and vascular tone. | Together, these functions allow the heart to circulate blood continuously, delivering oxygen and nutrients while removing waste products, thereby sustaining life.<|return|> ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences** | Feature | Bacteria | Viruses | |--------|----------|---------| | **Cellular structure** | True cells – have a cell membrane, cytoplasm, ribosomes, and a cell wall (in most). | Not cells – just a protein coat (capsid) surrounding genetic material; no membrane or organelles. | | **Size** | 0.5–5 µm (micrometers) – visible under a light microscope. | 20–300 nm (nanometers) – require an electron microscope to see. | | **Genetic material** | DNA or RNA, usually circular plasmids plus a chromosome; can be single‑ or double‑stranded. | DNA or RNA, single‑ or double‑stranded; always encapsulated in a capsid. | | **Reproduction** | Binary fission (cell division) – they grow and split into two identical daughter cells. | Must hijack a host cell’s machinery to replicate their genome and assemble new virions. | | **Metabolism** | Autonomous – can metabolize nutrients, produce energy, and grow independently. | No metabolism of their own – rely entirely on the host cell’s enzymes and energy. | | **Life cycle** | Can live freely in many environments (soil, water, body). | Only survive outside a host for a short time; must infect a host cell to replicate. | | **Treatment** | Antibiotics (target cell wall, ribosomes, DNA replication, etc.) can kill or inhibit them. | Antiviral drugs target specific viral enzymes or entry steps; antibiotics are ineffective. | | **Disease mechanisms** | Direct damage (toxins, enzymes), immune response to bacterial components, or secondary infections. | Direct damage from viral replication, immune response to viral antigens, or immune evasion strategies. | | **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Mycobacterium tuberculosis*. | Influenza virus, HIV, SARS‑CoV‑2, HPV. | **Bottom line:** Bacteria are living, self‑sustaining cells that can grow and divide on their own, whereas viruses are non‑living particles that need a host cell to replicate. 
This fundamental difference shapes how they cause disease, how they’re treated, and how they’re studied.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). The two most common types are type 1 and type 2, but there are other forms (gestational, monogenic, etc.). Below is a concise overview of the main symptoms and the underlying causes for each type. --- ## 1. Type 1 Diabetes (T1D) | **Symptoms** | **Typical Onset** | **Underlying Cause** | |--------------|------------------|----------------------| | • Polyuria (frequent urination) | Childhood or adolescence | Autoimmune destruction of pancreatic β‑cells → insulin deficiency | | • Polydipsia (excessive thirst) | | | | • Polyphagia (increased hunger) | | | | • Unintentional weight loss | | | | • Fatigue, weakness | | | | • Blurred vision | | | | • Slow‑healing cuts or infections | | | | • Ketoacidosis (in severe cases) – nausea, vomiting, abdominal pain, fruity breath | | | | • Rapid heart rate, dehydration | | | **Key Points** - The immune system mistakenly targets insulin‑producing β‑cells in the pancreas. - Genetic predisposition (HLA genes) + environmental triggers (viruses, diet) are thought to initiate the autoimmune attack. - Once β‑cells are destroyed, the body cannot produce insulin, so exogenous insulin therapy is required. --- ## 2. Type 2 Diabetes (T2D) | **Symptoms** | **Typical Onset** | **Underlying Cause** | |--------------|------------------|----------------------| | • Polyuria, polydipsia, polyphagia | Often adult‑onset, but increasingly seen in adolescents | Insulin resistance + relative insulin deficiency | | • Fatigue, blurred vision | | | | • Slow‑healing wounds, frequent infections | | | | • Peripheral neuropathy (tingling, numbness) | | | | • Vision changes, eye problems (retinopathy) | | | | • Cardiovascular symptoms (chest pain, shortness of breath) | | | | • No obvious weight loss; may be overweight/obese | | | **Key Points** - **Insulin resistance**: cells (especially muscle, liver, fat) become less responsive to insulin. - **β‑cell dysfunction**: the pancreas cannot ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
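## Appendix: Activation-Based Expert Selection (Illustrative Sketch)

The pruning methodology described in this card boils down to one selection step: for each layer, keep the experts that most often appear in the router's top-k decisions on domain data. The snippet below is a minimal, self-contained sketch of that step only. It uses synthetic router logits and illustrative shapes, and it is not the authors' actual pipeline; the real activation data lives in the linked expert-activations dataset.

```python
# Illustrative only: synthetic router logits stand in for the real activation dump.
import torch

num_layers, num_experts, top_k, keep = 24, 32, 4, 24  # keep=24 matches this card

torch.manual_seed(0)
# Hypothetical shape: (layers, tokens seen during domain evaluation, experts)
router_logits = torch.randn(num_layers, 10_000, num_experts)

# Count how often each expert lands in the per-token top-k routing decision.
topk_idx = router_logits.topk(top_k, dim=-1).indices               # (L, T, k)
counts = torch.zeros(num_layers, num_experts)
counts.scatter_add_(
    1,
    topk_idx.reshape(num_layers, -1),
    torch.ones(num_layers, topk_idx.shape[1] * top_k),
)

# Keep the most frequently used experts in every layer; these indices would then
# be used to slice the expert weights and the router projection of the checkpoint.
kept = counts.topk(keep, dim=-1).indices.sort(dim=-1).values        # (L, keep)
print(kept[0])  # expert ids retained for layer 0
```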
llencia/blockassist-bc-wiry_wise_hedgehog_1755065957
llencia
2025-08-13T06:19:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:19:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amoogeona/FrozenLake
amoogeona
2025-08-13T06:19:24Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-08-13T06:19:07Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym  # or `import gymnasium as gym`, depending on your setup

# load_from_hub is the Hugging Face Deep RL course helper (not a library import);
# it downloads and unpickles the saved model dictionary.
model = load_from_hub(repo_id="amoogeona/FrozenLake", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
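## Evaluation (sketch)

The following is a hedged evaluation sketch, not part of the original card. It assumes the pickled file holds a dict with at least `"env_id"` and `"qtable"` keys (the Hugging Face Deep RL course convention) and that the no-slippery 4x4 map used for the reported metric is wanted.

```python
# Greedy rollout of the saved Q-table; the key names are an assumption, see above.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="amoogeona/FrozenLake", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)    # metric above was reported without slipping
qtable = np.array(model["qtable"])

state, _ = env.reset(seed=0)
done, episode_return = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))             # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```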
ajtorek/bridgetower-refcoco
ajtorek
2025-08-13T06:19:13Z
40
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-07-31T22:39:04Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
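The card above only states that the weights were pushed with `PyTorchModelHubMixin`; the underlying architecture is not documented. The snippet below is a minimal sketch of the mixin's save/load mechanics with a placeholder module (`TinyRefExpHead`, hypothetical) rather than the real BridgeTower/RefCOCO model, since loading this repo would require the original class definition.

```python
# Generic PyTorchModelHubMixin round trip; the class name and sizes are placeholders.
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class TinyRefExpHead(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.box_head = nn.Linear(hidden_size, 4)  # e.g. predict a bounding box

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.box_head(features)


model = TinyRefExpHead(hidden_size=16)
model.save_pretrained("tiny-refexp-demo")               # writes config and weights locally
reloaded = TinyRefExpHead.from_pretrained("tiny-refexp-demo")
print(reloaded(torch.randn(1, 16)).shape)                # torch.Size([1, 4])
```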
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755065858
IvanJAjebu
2025-08-13T06:19:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:18:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VaarunC/gpt-oss-20b-lora
VaarunC
2025-08-13T06:18:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:18:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VaarunC/gpt-oss-20b-merged
VaarunC
2025-08-13T06:18:40Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T12:21:43Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
llencia/blockassist-bc-wiry_wise_hedgehog_1755065776
llencia
2025-08-13T06:16:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:16:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
SeungJun3214/wifi-gemma3-model2
SeungJun3214
2025-08-13T06:16:50Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "endpoints_compatible", "region:us" ]
null
2025-08-13T04:52:10Z
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: wifi-gemma3-model2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for wifi-gemma3-model2 This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SeungJun3214/wifi-gemma3-model2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.55.0 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
lur601/xlm-roberta-base-finetuned-panx-de
lur601
2025-08-13T06:15:46Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-16T10:25:43Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1377 - F1: 0.8581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.258 | 1.0 | 525 | 0.1546 | 0.8217 | | 0.1288 | 2.0 | 1050 | 0.1342 | 0.8513 | | 0.0824 | 3.0 | 1575 | 0.1377 | 0.8581 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
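### Example usage (sketch)

The original card does not include inference code. A minimal sketch follows, assuming German PAN-X-style NER based on the repository name:

```python
from transformers import pipeline

# Aggregate sub-word predictions into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="lur601/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```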
acidjp/blockassist-bc-pesty_extinct_prawn_1755065312
acidjp
2025-08-13T06:15:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:14:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Shekharmeena/Bhojpuri_text_to_speech
Shekharmeena
2025-08-13T06:14:36Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:canopylabs/3b-hi-pretrain-research_release", "base_model:finetune:canopylabs/3b-hi-pretrain-research_release", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T06:11:15Z
--- base_model: canopylabs/3b-hi-pretrain-research_release tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Shekharmeena - **License:** apache-2.0 - **Finetuned from model :** canopylabs/3b-hi-pretrain-research_release This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
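The card does not include usage code. A minimal load sketch is given below; the repo is tagged as a llama text-generation checkpoint, but how Bhojpuri text is turned into audio tokens (and how those are decoded to speech) is not documented here, so only weight loading is shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the checkpoint as a plain causal LM; the TTS-specific decoding step is undocumented.
tokenizer = AutoTokenizer.from_pretrained("Shekharmeena/Bhojpuri_text_to_speech")
model = AutoModelForCausalLM.from_pretrained("Shekharmeena/Bhojpuri_text_to_speech")
```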
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755063876
milliarderdol
2025-08-13T06:14:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:13:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring rough scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755065595
llencia
2025-08-13T06:13:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:13:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-14.3b-specialized-health_or_medicine-pruned-moe-only-21-experts
AmanPriyanshu
2025-08-13T06:13:15Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:12:33Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (21 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 21 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~14.3B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 21 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 21 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 65.6% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 21 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 21 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-14.3b-specialized-health_or_medicine-pruned-moe-only-21-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-health_or_medicine-pruned-moe-only-21-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-14.3b-specialized-health_or_medicine-pruned-moe-only-21-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-health_or_medicine-pruned-moe-only-21-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart performs three core functions that keep the body alive and functioning: | Function | What it does | How it works | |----------|--------------|--------------| | **Pumping** | Moves blood throughout the body | The left ventricle contracts to push oxygen‑rich blood into the aorta; the right ventricle pumps de‑oxygenated blood into the pulmonary artery. | | **Circulation** | Delivers oxygen, nutrients, hormones, and removes waste | Blood travels through arteries → capillaries → veins, exchanging substances with tissues. | | **Regulation** | Maintains blood pressure and flow | The heart’s intrinsic pacemaker (SA node) sets rhythm; autonomic nerves and hormones adjust rate and force of contraction to meet the body’s needs. | Together, these functions keep tissues supplied with oxygen and nutrients while removing metabolic waste, enabling the body to function and respond to changing demands.<|return|> ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences** | Feature | Bacteria | Viruses | |--------|----------|--------| | **Size** | Microscopic, 0.5–5 µm (visible under a microscope) | 20–200 nm (too small for a light microscope) | | **Structure** | Complex, cell‑like structure: cell wall, membrane, cytoplasm, organelles (e.g., ribosomes) | Very simple: a nucleic acid genome (DNA or RNA) surrounded by a protein coat (capsid); some have a lipid envelope | | **Life Cycle** | Independent life form; can grow, divide, and reproduce on their own (metabolism, energy production) | Not a living organism on its own; must infect a host cell and hijack its machinery to replicate | | **Reproduction** | Divide by binary (or sometimes more complex) cell division (mitosis‑like) | Replicate by copying their genome inside a host cell, then assembling new virions | | **Genetic Material** | DNA (usually double‑stranded) | DNA or RNA (single‑ or double‑stranded) | | **Infection Mechanism** | Can colonize or invade tissues, produce toxins, or cause inflammation; often spread by contact, food, water, vectors | Attach to specific host receptors, enter cells, use host enzymes to replicate; often cause cell lysis or dormancy | | **Treatment** | Antibiotics (target cell wall, protein synthesis, DNA replication, etc.) | Antiviral drugs (target viral enzymes or replication steps); vaccines; no antibiotics work | | **Evolutionary Complexity** | More complex genomes, can exchange genes via plasmids, conjugation, etc. | Simpler genomes; evolve mainly by mutation and recombination | | **Examples** | *Staphylococcus aureus*, *E. coli*, *Bacillus subtilis* | Influenza virus, HIV, SARS‑CoV‑2, HPV | **Bottom line:** Bacteria are true, independent microorganisms that can live and reproduce on their own, whereas viruses are essentially “parasite‑like” particles that need a host cell to make copies of themselves. This fundamental difference shapes how they cause disease, how we diagnose them, and how we treat or prevent infections.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? 
**Response:** ``` <|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic metabolic disorder in which the body cannot regulate blood glucose properly. The two main types—Type 1 and Type 2—share many symptoms but differ in underlying causes. --- ## 1. Symptoms (common to both types) | Symptom | Typical Onset | Why it Happens | |---------|---------------|----------------| | **Polyuria** (excessive urination) | Often first sign | High blood glucose spills into urine, pulling water with it (osmotic diuresis). | | **Polydipsia** (excessive thirst) | Usually follows polyuria | Loss of fluid through urine leads to dehydration; kidneys signal thirst. | | **Polyphagia** (increased hunger) | Common in Type 1; variable in Type 2 | Cells can’t get glucose → body signals “need more food.” | | **Weight loss** | More pronounced in Type 1 | Body breaks down fat and muscle for energy when glucose can’t be used. | | **Fatigue / weakness** | Early symptom | Energy deficit from impaired glucose utilization. | | **Blurred vision** | Can appear early | Hyperglycemia causes fluid shifts in the lens, altering its shape. | | **Slow‑healing cuts or infections** | Later symptom | High glucose impairs immune function and tissue repair. | | **Numbness or tingling in extremities** | Later symptom | Chronic hyperglycemia damages nerves (diabetic neuropathy). | | **Frequent infections (e.g., urinary, skin, oral)** | Later symptom | Impaired immune response. | > **Note:** In Type 2 diabetes, many people are asymptomatic for years; early signs may be subtle (e.g., mild fatigue, occasional blurred vision). --- ## 2. Causes | Type | Primary Cause | Key Risk Factors | |------|---------------|-----------------| | **Type 1** | Autoimmune destruction of pancreatic β‑cells → insulin deficiency | Genetic predisposition, viral infections (e.g., enteroviruses), environmental triggers, early childhood onset | | **Type 2** | Insulin resistance + relative insulin deficiency | Obesity, sedentary lifestyle, poor diet, family history, age ≥45, certain ethnicities (African‑American, Hispanic, Native American, Asian), gestational diabetes history | | **Gestational** | Hormonal changes during pregnancy → insulin resistance ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
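## Appendix: Speculative Decoding Sketch

The use cases above list this pruned checkpoint as a possible draft model for the full GPT-OSS-20B. The snippet below is a hedged sketch of that idea using transformers' assisted generation (`assistant_model`); whether the two checkpoints are compatible drafters in practice, and how much speed-up they give, is not verified here. Model ids are taken from this card.

```python
# Hedged sketch: pruned model as drafter for the full model via assisted generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "openai/gpt-oss-20b"
draft_id = "AmanPriyanshu/gpt-oss-14.3b-specialized-health_or_medicine-pruned-moe-only-21-experts"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("What are the main functions of the human heart?", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, max_new_tokens=128, do_sample=False, assistant_model=draft)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```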
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755065500
IvanJAjebu
2025-08-13T06:13:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:12:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
desibond/blockassist-bc-thriving_mighty_finch_1755062167
desibond
2025-08-13T06:11:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving mighty finch", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:11:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving mighty finch --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-mimic_sizable_finch_1755064115
motza0025
2025-08-13T06:11:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mimic sizable finch", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:11:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mimic sizable finch --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts
AmanPriyanshu
2025-08-13T06:08:29Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:07:51Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (19 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~13.1B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 19 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 19 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 59.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 19 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 19 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart performs three core functions that keep the body alive and functioning: | Function | What it does | How it works | |----------|--------------|--------------| | **Pumping (circulation)** | Moves blood throughout the body, delivering oxygen, nutrients, hormones, and removing waste. | The left ventricle ejects oxygen‑rich blood into the aorta; the right ventricle sends oxygen‑poor blood into the pulmonary artery. | | **Pressure regulation** | Maintains blood pressure within a range that allows tissues to receive adequate blood flow. | The heart’s contractility, heart rate, and the resistance of the vascular system together determine systolic and diastolic pressures. | | **Electrical coordination** | Generates and propagates the electrical impulses that trigger each heartbeat, ensuring the chambers contract in a synchronized rhythm. | The sinoatrial (SA) node initiates the impulse, which travels through the atrioventricular (AV) node, bundle branches, and Purkinje fibers to the ventricles. | Together, these functions keep the circulatory system operating efficiently, supporting all metabolic processes in the body.<|return|> ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences** | Feature | Bacteria | Viruses | |--------|----------|--------| | **Basic nature** | Living, self‑replicating cells | Non‑living particles that need a host cell to reproduce | | **Size** | ~0.2–2 µm (micrometers) | ~20–200 nm (nanometers) | | **Structure** | Cell wall, membrane, cytoplasm, DNA (often circular), ribosomes, organelles (e.g., mitochondria in some) | Protein coat (capsid) surrounding genetic material (DNA or RNA); no cellular machinery | | **Genetic material** | DNA (usually circular plasmids + chromosome) | DNA or RNA (single or double strand) | | **Replication** | Independent: divide by mitosis (bacterial cell division) | Dependent: hijack host cell’s replication machinery | | **Metabolism** | Can photosynthesize, oxidize, ferment, etc. | No metabolism; rely entirely on host cell | | **Infection mechanism** | Can colonize surfaces, form biofilms, or invade tissues; often spread by contact, airborne droplets, or vectors | Attach to specific host receptors, penetrate, replicate inside host cell | | **Treatment** | Antibiotics (target cell wall, protein synthesis, DNA replication, etc.) | Antiviral drugs (target viral enzymes or entry), vaccines; antibiotics ineffective | | **Evolutionary classification** | Organisms (prokaryotes) | Pathogens (not classified as organisms) | | **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Bacillus subtilis* | Influenza virus, HIV, SARS‑CoV‑2, HPV | **Bottom‑line:** - **Bacteria** are true living cells that can survive and grow on their own. - **Viruses** are essentially “genetic packages” that need a living cell to make copies of themselves.<|return|> ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic metabolic disorder in which the body cannot use glucose (sugar) properly. 
The two main types—Type 1 and Type 2—have overlapping symptoms but differ in their underlying causes. Below is a concise overview of the most common signs, symptoms, and the key factors that lead to each type. --- ## 1. Symptoms (Signs that may indicate diabetes) | Symptom | Typical Onset | Why It Happens | |---------|---------------|----------------| | **Polyphagia** (excessive hunger) | Often early | Body can’t use glucose → cells starve → hunger signals | | **Polydipsia** (excessive thirst) | Early | High blood sugar pulls water out of cells → dehydration | | **Polyuria** (frequent urination) | Early | Kidneys try to excrete excess glucose → osmotic diuresis | | **Weight loss** (especially in Type 1) | Early | Body burns fat & muscle for energy | | **Fatigue / weakness** | Early | Cells lack glucose → low energy | | **Blurred vision** | Early‑mid | Hyperglycemia swells the lens, altering focus | | **Slow wound healing** | Mid‑to‑late | Poor circulation & immune function | | **Frequent infections** (skin, urinary, thrush) | Mid‑to‑late | High glucose supports bacterial growth; immune dysfunction | | **Peripheral neuropathy** (tingling, numbness) | Late | Chronic hyperglycemia damages nerves | | **Foot ulcers / amputations** | Late | Neuropathy + poor circulation | | **Diabetic ketoacidosis** (Type 1) | Sudden | Insulin deficiency → fat breakdown → ketones → acidosis | | **Hyperosmolar hyperglycemic syndrome** (Type 2) | Sudden | Extremely high glucose → dehydration, altered mental status | > **Note:** Many people, especially with Type 2, may have *no symptoms* for years. Routine screening is essential if risk factors are present. --- ## 2. Causes (Underlying mechanisms) ### Type 1 Diabetes (T1D) | Cause | Key Features | |-------|--------------| | **Autoimmune destruction of β‑cells** | The immune system attacks insulin‑producing cells in the pancreas. | | **Genetic predisposition** | Certain H ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
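The use cases above list speculative decoding, with this pruned checkpoint serving as a draft model for the full GPT-OSS-20B. Below is a minimal sketch of that setup using transformers' assisted generation (`assistant_model`); the prompt is illustrative, both checkpoints must fit in memory, and how well this particular draft model accelerates the full model has not been measured here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Target model: the full GPT-OSS-20B. Draft model: this pruned 19-expert variant.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

messages = [{"role": "user", "content": "List three early warning signs of a stroke."}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
inputs = {k: v.to(target.device) for k, v in inputs.items()}

# assistant_model switches generate() into assisted (speculative) decoding:
# the draft proposes several tokens cheaply and the target verifies them.
outputs = target.generate(
    **inputs,
    max_new_tokens=256,
    assistant_model=draft,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```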
kooldark/majicmixRealistic_v7-inpainting
kooldark
2025-08-13T06:06:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-13T06:03:26Z
--- license: apache-2.0 ---
maidacundo/annie-lite-v0.2.6-qwen3-8b
maidacundo
2025-08-13T06:06:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T05:26:58Z
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
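The card does not include a usage snippet. A minimal sketch, assuming the checkpoint loads as a standard Qwen3 chat model through transformers; the prompt and sampling settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "maidacundo/annie-lite-v0.2.6-qwen3-8b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a two-sentence summary of what fine-tuning does."}]  # illustrative
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```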
doodod/Turn-Detector-Qwen2.5-0.5B-Instruct
doodod
2025-08-13T06:05:22Z
8
0
null
[ "onnx", "safetensors", "qwen2", "zh", "en", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct", "license:mit", "region:us" ]
null
2025-08-05T10:35:41Z
---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---

# Introduction

> Voice Chat Pipeline -> ASR + TurnDetector + VAD + LLM + TTS

In a voice chat pipeline, relying only on **VAD (Voice Activity Detection)** to decide whether the user's current turn has ended cannot handle users who pause mid-sentence to think. When the user pauses before the turn is actually finished, VAD detects the silence and prematurely declares the turn over, even though the sentence is not yet semantically complete. This is the problem the **Turn-Detector Model** addresses.

The turn detector is intended for voice + text dialogue scenarios. It analyzes the text transcribed by the ASR model at the semantic level, so it can judge more accurately whether the current user input has ended.

The Turn-Detector Model is built on small (0.5B/0.6B) instruction-tuned Transformer language models; its task is to predict the probability that the next token is `<|im_end|>`.

- **Task**: Semantic-level turn detection, i.e., predicting the probability that the next token is `<|im_end|>`
- **Model**: Small instruction-tuned models (Qwen2.5-0.5B-Instruct, Qwen3-0.6B)
- **Goal**: Reduce inaccurate VAD interruptions in voice dialogue pipelines (e.g., pauses caused by thinking of the next word)

```
# 1. get the user input
How tall is the Eiffel Tower

# 2. apply_chat_template
<|im_start|>user<|im_sep|>How tall is the Eiffel Tower<|im_end|>

# 3. cut <|im_end|>
<|im_start|>user<|im_sep|>How tall is the Eiffel Tower

# 4. predict next token
```

**Language**: Chinese, English

**GitHub Page**: https://github.com/zxsddcs/Turn-Detector

# Dataset

The turn detector is mainly used in Chinese and English voice + text dialogue scenarios, where the input is mostly plain text instructions and colloquial chat. The training data therefore comes from public datasets such as Alpaca, MagicData (an ASR dialogue dataset), and ShareChatX.

- Alpaca
- MagicData
- ShareChatX

Characteristics of ASR-transcribed text:

- Sentence endings often lack punctuation
- Filler words or "..." pauses may appear in the middle of the utterance

Dataset optimizations based on these characteristics:

- Sentence filtering: use a large model to review each sample and keep only data that is semantically complete and colloquial
- Filler word insertion: randomly insert one filler word into a sentence to simulate real spoken dialogue

Large models were also used to generate Chinese and English filler word tables:

```python
en_words = ['uh', 'um', 'ah', 'er', 'hmm', ...]
zh_words = ['嗯', '啊', '哦', '呃', '那个', '对吧', ...]
```

Example of the optimized data:

```json
[
  {
    "instruction": "How tall is the Eiffel Tower",
    "input": "",
    "output": ""
  },
  {
    "instruction": "How tall is the um... Eiffel Tower",
    "input": "",
    "output": ""
  },
  {
    "instruction": "Um how tall is the Eiffel Tower",
    "input": "",
    "output": ""
  },
  ...
]
```

# Quantization

```python
from optimum.onnxruntime import ORTModelForCausalLM
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
from transformers import AutoTokenizer

model_checkpoint = ""  # path or hub id of the fine-tuned turn detector
save_directory = ""    # where to write the ONNX export and quantized weights

# Export the model to ONNX
ort_model = ORTModelForCausalLM.from_pretrained(
    model_checkpoint,
    export=True,
    use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)

# Dynamic INT8 quantization with AVX512-VNNI kernels
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(ort_model)
quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)
```

# References

## Text

- Think Beyond VAD: End-of-Turn Detection for Voice Bot: A Python Implementation (https://medium.com/@manoranjan.rajguru/end-of-turn-detection-with-transformers-a-python-implementation-23bd74f621f3)
- How to build smarter turn detection for Voice AI (https://blog.speechmatics.com/semantic-turn-detection#wrapping-up-where-were-headed-next)
- LiveKit (https://docs.livekit.io/agents/build/turns/turn-detector/#overview)
- Text endpoint detection with large language models (基于大语言模型实现文本端点检测) (https://blog.csdn.net/qq_33200967/article/details/145210382)

## Audio

- Speculative End-Turn Detector for Efficient Speech Chatbot Assistant (https://arxiv.org/html/2503.23439v1)
- Smart Turn v2 (https://www.daily.co/blog/smart-turn-v2-faster-inference-and-13-new-languages-for-voice-ai/)
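The card explains the scoring recipe (apply the chat template, cut the trailing `<|im_end|>`, read the next-token probability) but does not include inference code. Below is a minimal sketch of that recipe with plain transformers; it uses the tokenizer's built-in chat template rather than the `<|im_sep|>` formatting shown above, and the idea of comparing the returned probability to a threshold is an assumption, not something the card specifies.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "doodod/Turn-Detector-Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
model.eval()

END_TOKEN = "<|im_end|>"
END_ID = tokenizer.convert_tokens_to_ids(END_TOKEN)

def end_of_turn_probability(transcript: str) -> float:
    """Probability that the model would emit <|im_end|> right after the transcript."""
    # Wrap the ASR transcript as a user turn, then cut the trailing <|im_end|>
    # so the model has to predict it as the next token.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": transcript}], tokenize=False
    ).rsplit(END_TOKEN, 1)[0]
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    return torch.softmax(next_token_logits, dim=-1)[END_ID].item()

print(end_of_turn_probability("How tall is the"))               # incomplete turn: expect a low score
print(end_of_turn_probability("How tall is the Eiffel Tower"))  # complete turn: expect a higher score
```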
doodod/Turn-Detector-Qwen3-0.6B
doodod
2025-08-13T06:04:28Z
10
0
null
[ "onnx", "safetensors", "qwen3", "zh", "en", "base_model:Qwen/Qwen3-0.6B", "base_model:quantized:Qwen/Qwen3-0.6B", "license:mit", "region:us" ]
null
2025-08-04T06:44:50Z
---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen3-0.6B
---

# Introduction

> Voice Chat Pipeline -> ASR + TurnDetector + VAD + LLM + TTS

In a voice chat pipeline, relying only on **VAD (Voice Activity Detection)** to decide whether the user's current turn has ended cannot handle users who pause mid-sentence to think. When the user pauses before the turn is actually finished, VAD detects the silence and prematurely declares the turn over, even though the sentence is not yet semantically complete. This is the problem the **Turn-Detector Model** addresses.

The turn detector is intended for voice + text dialogue scenarios. It analyzes the text transcribed by the ASR model at the semantic level, so it can judge more accurately whether the current user input has ended.

The Turn-Detector Model is built on small (0.5B/0.6B) instruction-tuned Transformer language models; its task is to predict the probability that the next token is `<|im_end|>`.

- **Task**: Semantic-level turn detection, i.e., predicting the probability that the next token is `<|im_end|>`
- **Model**: Small instruction-tuned models (Qwen2.5-0.5B-Instruct, Qwen3-0.6B)
- **Goal**: Reduce inaccurate VAD interruptions in voice dialogue pipelines (e.g., pauses caused by thinking of the next word)

```
# 1. get the user input
How tall is the Eiffel Tower

# 2. apply_chat_template
<|im_start|>user<|im_sep|>How tall is the Eiffel Tower<|im_end|>

# 3. cut <|im_end|>
<|im_start|>user<|im_sep|>How tall is the Eiffel Tower

# 4. predict next token
```

**Language**: Chinese, English

**GitHub Page**: https://github.com/zxsddcs/Turn-Detector

# Dataset

The turn detector is mainly used in Chinese and English voice + text dialogue scenarios, where the input is mostly plain text instructions and colloquial chat. The training data therefore comes from public datasets such as Alpaca, MagicData (an ASR dialogue dataset), and ShareChatX.

- Alpaca
- MagicData
- ShareChatX

Characteristics of ASR-transcribed text:

- Sentence endings often lack punctuation
- Filler words or "..." pauses may appear in the middle of the utterance

Dataset optimizations based on these characteristics:

- Sentence filtering: use a large model to review each sample and keep only data that is semantically complete and colloquial
- Filler word insertion: randomly insert one filler word into a sentence to simulate real spoken dialogue

Large models were also used to generate Chinese and English filler word tables:

```python
en_words = ['uh', 'um', 'ah', 'er', 'hmm', ...]
zh_words = ['嗯', '啊', '哦', '呃', '那个', '对吧', ...]
```

Example of the optimized data:

```json
[
  {
    "instruction": "How tall is the Eiffel Tower",
    "input": "",
    "output": ""
  },
  {
    "instruction": "How tall is the um... Eiffel Tower",
    "input": "",
    "output": ""
  },
  {
    "instruction": "Um how tall is the Eiffel Tower",
    "input": "",
    "output": ""
  },
  ...
]
```

# Quantization

```python
from optimum.onnxruntime import ORTModelForCausalLM
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
from transformers import AutoTokenizer

model_checkpoint = ""  # path or hub id of the fine-tuned turn detector
save_directory = ""    # where to write the ONNX export and quantized weights

# Export the model to ONNX
ort_model = ORTModelForCausalLM.from_pretrained(
    model_checkpoint,
    export=True,
    use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)

# Dynamic INT8 quantization with AVX512-VNNI kernels
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(ort_model)
quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)
```

# References

## Text

- Think Beyond VAD: End-of-Turn Detection for Voice Bot: A Python Implementation (https://medium.com/@manoranjan.rajguru/end-of-turn-detection-with-transformers-a-python-implementation-23bd74f621f3)
- How to build smarter turn detection for Voice AI (https://blog.speechmatics.com/semantic-turn-detection#wrapping-up-where-were-headed-next)
- LiveKit (https://docs.livekit.io/agents/build/turns/turn-detector/#overview)
- Text endpoint detection with large language models (基于大语言模型实现文本端点检测) (https://blog.csdn.net/qq_33200967/article/details/145210382)

## Audio

- Speculative End-Turn Detector for Efficient Speech Chatbot Assistant (https://arxiv.org/html/2503.23439v1)
- Smart Turn v2 (https://www.daily.co/blog/smart-turn-v2-faster-inference-and-13-new-languages-for-voice-ai/)
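To complement the quantization script above, here is a sketch of loading the quantized export back with ONNX Runtime and scoring the end-of-turn probability. The directory name and the `model_quantized.onnx` file name are assumptions based on optimum's defaults; adjust them to match your export.

```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer
import torch

save_directory = "turn-detector-qwen3-onnx"  # assumed: the directory written by the script above

tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = ORTModelForCausalLM.from_pretrained(
    save_directory,
    file_name="model_quantized.onnx",  # assumed: optimum's default name for the quantized weights
    use_cache=False,                   # matches the export above
)

transcript = "How tall is the Eiffel Tower"
# Apply the chat template, then cut the trailing <|im_end|> per the card's recipe.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": transcript}], tokenize=False
).rsplit("<|im_end|>", 1)[0]

inputs = tokenizer(prompt, return_tensors="pt")
next_token_logits = model(**inputs).logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)
end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
print(f"P(end of turn) = {probs[end_id].item():.3f}")
```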
AmanPriyanshu/gpt-oss-12.0b-specialized-health_or_medicine-pruned-moe-only-17-experts
AmanPriyanshu
2025-08-13T06:04:01Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T06:03:22Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (17 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 17 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~12.0B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 17 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 17 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 53.1% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 17 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 17 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-12.0b-specialized-health_or_medicine-pruned-moe-only-17-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-health_or_medicine-pruned-moe-only-17-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-12.0b-specialized-health_or_medicine-pruned-moe-only-17-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-health_or_medicine-pruned-moe-only-17-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? **Response:** ``` <|channel|>analysis<|message|>Need answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular organ that performs several essential functions to keep the body alive and functioning. 
Its main roles can be grouped into three broad categories: | **Primary Function** | **What It Does** | **Why It Matters** | |---------------------|----------------|----------------| | **Pumping Blood** | The heart contracts (systole) and relaxes (diastole) to push blood through the circulatory system. | Keeps oxygen, nutrients, hormones, and waste products delivered to and removed from every cell. | | **Maintaining Blood Pressure** | By regulating the force and rate of contraction, the heart sets the pressure needed to move blood against resistance in the vessels. | Keeps tissues perfused and prevents shock or organ failure. | | **Regulating Blood Flow** | The heart’s rhythm and contractility are adjusted by the nervous system and hormones to match the body’s needs (e.g., exercise, rest, stress). | Allows the body to respond quickly to changing demands. | ### How the Heart Performs These Functions 1. **Electrical Conduction System** - **Sinoatrial (SA) node**: natural pacemaker that initiates each heartbeat. - **Atrioventricular (AV) node**: delays conduction to allow atria to finish contracting before ventricles. - **Bundle of His → Purkinje fibers**: conduct impulses through ventricles, coordinating ventricular contraction. 2. **Cardiac Muscle (Myocardium)** - **Contractile fibers**: generate force to pump blood. - **Intercalated discs**: specialized junctions that allow rapid spread of electrical impulses and mechanical coupling between cells. 3. **Valvular System** - **Aortic, pulmonary, mitral, and tricuspid valves**: prevent back‑flow of blood, ensuring unidirectional flow from chambers to major vessels. 4. **Autonomic Regulation** - **Sympathetic nervous system**: increases heart rate and contractility (fight‑or‑flight). - **Parasympathetic (vagal) tone**: slows heart rate (rest, relaxation). - **Hormonal influences**: catecholamines, angiotensin II, atrial natriuretic peptide, etc., fine‑tune cardiac output. 5. **Metabolic Support** - **Coronary circulation**: supplies ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need explain differences.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the big differences** | Feature | Bacteria | Viruses | |---|---|---| | **Basic nature** | Single‑cell, living organisms (prokaryotes) | Particles that are not living; they are just a package of genetic material surrounded by a coat (capsid) and sometimes a lipid envelope | | **Size** | 1 µm to 10 µm (about 1000‑10,000 nm) | 0.25 µm to 0.5 µm (250‑500 nm) – roughly 100‑200 nm for most | | **Structure** | Cell wall, plasma membrane, cytoplasm, ribosomes, DNA (often circular) | Genome (DNA or RNA, single or double strand), capsid, sometimes envelope; no internal organelles | | **Reproduction** | Cell division (mitotic‑like) – one cell splits into two, then two into four, etc. 
| Replication inside a host cell: the virus injects its genome, hijaxes the host’s machinery to make copies of its genome and proteins, then assembles new particles | | **Genetic material** | DNA (most) | DNA or RNA (single‑ or double‑strand) | | **Energy source** | Metabolism – can produce energy from nutrients (photosynthesis, respiration, fermentation) | No metabolism; relies entirely on the host cell’s enzymes and energy | | **Life cycle** | Can survive outside hosts (in soil, water, on surfaces) | Usually need a living host cell to produce new copies; many are short‑lived outside a host | | **Infection mechanism** | Attach to host cells, penetrate, grow, and release new cells (often by lysis or budding) | Attach to host cells, inject genome, replicate, assemble, and exit (lysis, budding, or exocytosis) | | **Immune response** | Recognized by innate and adaptive immunity; can be targeted by antibiotics, phage therapy, etc. | Recognized by innate and adaptive immunity; targeted by antiviral drugs, vaccines, monoclonal antibodies | | **Treatment** | Antibiotics (target cell wall, protein synthesis, DNA replication, etc.) | Antivirals (target viral enzymes, entry, replication, etc.) | | **Examples** | *Escherichia coli*, ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic metabolic disorder in which the body cannot regulate blood glucose properly. The two most common types are: | Type | Main cause | Typical symptoms | |------|------------|-----------------| | **Type 1** (juvenile‑onset) | Autoimmune destruction of insulin‑producing β‑cells in the pancreas → little or no insulin | • Sudely rapid weight loss<br>• Polyphagia (constant hunger)<br>• Poluria (fre frequent urination)<br>• Polyphoria (fre frequent thirst)<br>• Fatigue, weakness<br>• Frequent infections (especially in the skin, mouth, and urinary tract)<br>• Delayed wound healing | | **Type 2** (adult‑onset) | Insulin resistance in peripheral tissues + relative insulin deficiency | • Gradual weight gain or obesity (especially abdominal)<br>• Polyphagia, poluria, polyphoria (often milder than type 1)<br>• Fatigue, blurred vision, slow wound healing<br>• Occasional tingling or numbness in extremities (early neuropathy)<br>• Dark, itchy skin (acanthosis nigricans) in some cases | --- ### Common symptoms (present in both types) 1. **Polyphagia** – constant, unrelating hunger. 2. **Poluria** – increased frequency and volume of urination. 3. **Polydipsia** – excessive thirst. 4. **Weight changes** – rapid loss in type 1; gradual gain in type 2. 5. **Fatigue and weakness** – due to cells not getting glucose. 6. **Blurred vision** – hyperglycemia causes fluid shifts in the eye. 7. **Infections** – especially skin, mouth, and urinary tract. 8. **Delayed wound healing** – impaired blood flow and immune function. 9. **Peripheral neuropathy** – tingling, numbness, or pain in hands/feet. 10. **Anemia** – low red‑cell count (especially in type 1). --- ### Causes | Factor | How it contributes to diabetes | |--------|------------------------------| | **Autoimmune attack** (type 1) | The immune system mistakenly targets β‑cells → loss of insulin production. 
| | **Insulin resistance** (type 2 ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
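The pruning recipe above is described only in prose. The toy sketch below illustrates the general idea (rank experts by how often the router selected them, keep the top 17, slice the expert and router weights); the tensor names and shapes are invented for illustration and do not correspond to the actual GPT-OSS module layout or the authors' pipeline.

```python
import torch

num_experts, d_model, d_ff = 32, 8, 16   # toy sizes, not GPT-OSS dimensions
keep = 17                                # experts retained per layer in this card

# Hypothetical per-expert activation counts collected on a domain-specific eval set.
activation_counts = torch.randint(0, 1000, (num_experts,))
kept_experts = torch.topk(activation_counts, keep).indices.sort().values

# A toy MoE layer: per-expert feed-forward weights plus a router projection.
expert_w_in = torch.randn(num_experts, d_model, d_ff)
expert_w_out = torch.randn(num_experts, d_ff, d_model)
router_weight = torch.randn(num_experts, d_model)  # one logit row per expert

# "Pruning" keeps only the selected experts' parameters and router rows; the
# top-k routing logic itself is unchanged, it simply sees fewer candidates.
pruned_w_in = expert_w_in[kept_experts]
pruned_w_out = expert_w_out[kept_experts]
pruned_router = router_weight[kept_experts]

original = expert_w_in.numel() + expert_w_out.numel()
pruned = pruned_w_in.numel() + pruned_w_out.numel()
print(f"Kept experts: {kept_experts.tolist()}")
print(f"Expert parameters: {original} -> {pruned} ({keep / num_experts:.1%} of original)")
```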
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755063933
Sayemahsjn
2025-08-13T06:03:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T06:03:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sankniju/blockassist-bc-plump_carnivorous_chimpanzee_1755064663
sankniju
2025-08-13T05:59:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump carnivorous chimpanzee", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:58:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump carnivorous chimpanzee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-10.8b-specialized-health_or_medicine-pruned-moe-only-15-experts
AmanPriyanshu
2025-08-13T05:59:00Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T05:58:23Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (15 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 15 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~10.8B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 15 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 15 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 46.9% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 15 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 15 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-10.8b-specialized-health_or_medicine-pruned-moe-only-15-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-health_or_medicine-pruned-moe-only-15-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-10.8b-specialized-health_or_medicine-pruned-moe-only-15-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-health_or_medicine-pruned-moe-only-15-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need to answer: main functions: pumping blood, maintaining circulation, delivering oxygen/nutrients, removing waste, regulating blood pressure, electrical conduction, etc.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular organ that performs several essential functions to keep the body alive and functioning. Its primary roles can be grouped into three major categories: | **1. Pumping Blood** | **2. Regulating Blood Flow** | **3. Maintaining Electrical Coordination** | |---|---|---| | • **Cardiac Output** – The heart continuously pumps blood into the systemic and pulmonary circulations, delivering oxygen and nutrients to tissues and removing waste products. | • **Blood Pressure Control** – By adjusting stroke volume and heart rate, the heart helps maintain arterial pressure within a range that supports organ perfusion. | • **Electrical Conduction System** – The sino‑atrial (SA) node, atrioventricular (AV) node, bundle branches, and Purkinje fibers coordinate the heartbeat, ensuring that the atria and ventricles contract in a timely, synchronized fashion. | | • **Stroke Volume** – The amount of blood ejected per beat (≈ 250 mL in a resting adult). | • **Heart Rate** – Beats per minute (≈ 60–70 bpm at rest) that can be modulated by the autonomic nervous system and hormones. | • **Concordance** – The heart’s ability to contract against the resistance of the blood vessels (arterial compliance). | | • **Cardiac Output** – Stroke volume × heart rate (≈ 5 L/min in a resting adult). | • **Vascular Resistance** – The heart responds to changes in peripheral resistance (e.g., vasoconstriction or vasodilation). | • **Pacemaker Activity** – The SA node initiates the impulse; the AV node delays it to allow atrial filling before ventricular contraction. | ### Key Functions in Detail 1. **P pumping blood (Cardiac Output)** * **Systemic circulation** – Oxygenated blood is pumped from the left ventricle into the aorta, supplying all body tissues. * **Pulmonary circulation** – De‑oxygenated blood is pumped from the right ventricle into the lungs for gas exchange. 2. **Regulating blood flow** * **Autoregulation** – The heart can adjust its output to match the metabolic demands of organs (e.g., during exercise, the heart rate ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need explain differences: size, complexity, replication, genetics, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – The Key Differences** | Feature | Bacteria | Viruses | |---|---|---| | **Cellular nature** | True cells – have a distinct cytoplasm, membrane, and organelles (e.g., ribosomes, mitochondria in some). | No cell structure; they are just genetic material (DNA or RNA) surrounded by a protein coat (capsid) and sometimes a lipid envelope. | | **Size** | Microscopic but larger: 1–10 µm (≈ 1000–10 000 nm). | Tiny: 0.1–100 nm (≈ 100–1000 nm). | | **Genetic material** | Usually a single‑stranded or double‑stranded DNA genome (≈ 1–10 M bp). | DNA or RNA genome (≈ 0.1–10 M bp). | | **Replication** | Can replicate autonomically inside a cell: they grow, divide (bacterial cell division), and can produce many copies of themselves. | Must hij‑take a host cell’s machinery to replicate; they cannot divide on their own. | | **Metabolism** | Have metabolic pathways: they can oxidize, ferment, photosynthesize, etc. 
| No metabolism; they rely entirely on the host cell’s metabolic machinery. | | **Structure** | Cell wall (peptidoglycan in Gram‑positive, peptidoglycan + outer membrane in Gram‑negative), cytoplasmic membrane, ribosomes, sometimes flagella or pili. | Capsid (protein shell) and sometimes a lipid envelope; no ribosomes or other organelles. | | **Life cycle** | Simple: growth → division → death. | Complex: attachment → entry → replication → assembly → release (lytic or latent). | | **Infection mechanism** | Can invade tissues, produce toxins, or compete for nutrients. | Attach to specific receptors, inject genetic material, commandeer host cell. | | **Immune response** | Recognized by innate and adaptive immunity; can be cleared by phagocytosis, antibiotics, etc. | Recognized by innate and adaptive immunity; often evade immune detection by hiding in host genome or altering antigens. | | **Treatment** | Antibiotics (target cell wall synthesis, protein synthesis, etc.). | ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic metabolic disorder that occurs when the body cannot use glucose (sugar) properly. The two most common types are type 1 (autoimmune destruction of insulin‑producing β‑cells) and type 2 (insulin resistance with eventual β‑cell failure). Below is a quick‑look guide to the symptoms and the underlying causes for each type. --- ## 1. Symptoms of Diabetes | Symptom | Why it happens | How it shows in everyday life | |---|---|---| | **Polyuria** (excessive urination) | High blood glucose → kidneys try to flush it out | Frequent trips to the bathroom, especially at night | | **Polydipsia** (excessive thirst) | Osmotic effect of glucose + low insulin → brain signals “thirst” | Constant feeling of thirst, even after drinking | | **Polyphagia** (excessive hunger) | Low insulin → cells think you’re starved | Eating more than usual, cravings for sugary foods | | **Weight loss** (especially type 1) | Rapid glucose loss through urine & poor appetite | Noticeable weight drop despite normal or increased food intake | | **Fatigue / weakness** | Cells can’t use glucose → energy deficit | Feeling tired, sluggish, or “low‑energy” | | **Blurred vision** | Fluctuating blood sugar changes fluid balance in the eye | Vision changes, especially after meals or at night | | **Slow wound healing** | Poor circulation & immune function | Cuts or sores that take weeks to heal | | **Increased infections** | Immune system is weakened | Frequent col col, thrush, or skin infections | | **Peripheral neuropathy** | High glucose damages nerves | Tingling, numbness, or burning in feet and hands | | **Gangrene or amputations** | Severe peripheral vascular disease | Rare, but can lead to tissue death in extremities | > **Note**: In type 1 diabetes, symptoms often appear suddenly and can be severe. In type 2, symptoms may develop gradually and can be subtle, especially in the early stages. --- ## 2. 
Causes of Diabetes ### Type 1 Diabetes (T1D) | Cause | Mechanism | Risk factors | |---|---|---| | **Autoimmune destruction of β‑cells** | The immune system mistakenly attacks ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
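As a rough cross-check of the figures quoted above (46.9% of expert parameters retained, ~10.8B total parameters), a small back-of-the-envelope sketch follows; the per-expert and shared parameter counts are estimates inferred from the sizes reported across this series of pruned models, not official numbers.

```python
# Back-of-the-envelope size estimate (assumed splits, not official figures).
total_experts = 32
kept_experts = 15

params_per_expert_b = 0.59  # assumption: ~billions of parameters per expert, summed over all layers
shared_params_b = 2.1       # assumption: attention, embeddings, router, norms

expert_fraction = kept_experts / total_experts
estimated_total_b = shared_params_b + kept_experts * params_per_expert_b

print(f"Retained expert parameters: {expert_fraction:.1%}")           # ~46.9%, as stated above
print(f"Estimated total size: ~{estimated_total_b:.1f}B parameters")  # ~10.9B vs. the card's ~10.8B
```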
aXsalll/blockassist-bc-chattering_galloping_ape_1755063803
aXsalll
2025-08-13T05:57:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "chattering galloping ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:57:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - chattering galloping ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/1538536
seraphimzzzz
2025-08-13T05:57:13Z
0
0
null
[ "region:us" ]
null
2025-08-13T05:56:57Z
[View on Civ Archive](https://civitaiarchive.com/models/1448950?modelVersionId=1638180)
dylandavies984/blockassist-bc-fluffy_fleecy_rooster_1755062974
dylandavies984
2025-08-13T05:57:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fluffy fleecy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:56:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fluffy fleecy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755064567
llencia
2025-08-13T05:56:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:56:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/726085
seraphimzzzz
2025-08-13T05:55:52Z
0
0
null
[ "region:us" ]
null
2025-08-13T05:55:45Z
[View on Civ Archive](https://civitaiarchive.com/models/726430?modelVersionId=812277)
seraphimzzzz/1542630
seraphimzzzz
2025-08-13T05:55:28Z
0
0
null
[ "region:us" ]
null
2025-08-13T05:55:24Z
[View on Civ Archive](https://civitaiarchive.com/models/1445725?modelVersionId=1642193)
mang3dd/blockassist-bc-tangled_slithering_alligator_1755062919
mang3dd
2025-08-13T05:55:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:55:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-9.6b-specialized-health_or_medicine-pruned-moe-only-13-experts
AmanPriyanshu
2025-08-13T05:54:28Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T05:53:59Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (13 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~9.6B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 13 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 13 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 40.6% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-9.6b-specialized-health_or_medicine-pruned-moe-only-13-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-health_or_medicine-pruned-moe-only-13-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-9.6b-specialized-health_or_medicine-pruned-moe-only-13-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-health_or_medicine-pruned-moe-only-13-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? 
**Response:** ``` <|channel|>analysis<|message|>Need to answer: main functions: pumping blood, regulating blood pressure, oxygen delivery, nutrient transport, endocrine functions (h hormones), etc.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a vital organ that performs several key functions essential for life. Its primary roles can be grouped into three main categories: | **Primary Functions** | **What Does It Do?** | **Why It Matters** | |------------------------|------------------------|------------------------| | **P pumping blood** | The heart’s rhythmic contractions (the heartbeat) move blood through the circulatory system. | Keeps oxygen, nutrients, and waste products circulating throughout the body. | | **Regulating blood pressure** | By adjusting the force of its contractions (stroke volume) and the rate of heartbeats (heart rate), the heart controls the pressure that blood exsors on the blood vessels. | Maints a stable blood pressure that is necessary for proper organ perfusion and fluid balance. | | **O‑delivery & nutrient transport** | The heart’s pumping action ensures that oxygen and nutrients reach every tissue and organ, while also removing waste products (CO₂, metabolic by‑products) from the blood. | Keeps every cell alive and functioning. | In addition, the heart has several secondary functions that support its primary roles: 1. **End‑organ regulation** – The heart’s rhythm and force of contraction are influenced by the autonomic nervous system (sym‑ and parasym‑), hormones (e.g., adrenaline, nor‑adrenaline, angiotensin II), and local chemical signals (e.g., oxygen, CO₂). 2. **Hormone production** – The atrial cells produce atr natriuretic peptide (ANP) and brain natriuretic peptide (BNP), which help regulate fluid balance and blood pressure. 3. **Mechanical support** – The heart’s structure (four chambers, valves, and the conduction system) ensures that blood flows in the correct direction and that the blood volume is properly distributed. **In short:** The heart’s main functions are to pump blood, regulate blood pressure, and deliver oxygen and nutrients to every tissue while removing waste products. These functions keep the body’s internal environment stable and allow all organs to operate efficiently.<|return|> ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need explain differences: size, structure, replication, genetics, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – The Bottom‑Line Differences** | Feature | Bacteria | Viruses | |---------|----------|--------| | **Definition** | Single‑cell, living organisms that can grow, divide, and reproduce on their own. | Tiny particles (≤ 100 nm) that are not living cells; they need a host cell to replicate. | | **Size** | 1 µm – 10 µm (microscopic, but still visible under a microscope). | 100 – 200 nm (usually invisible to the naked eye). | | **Structure** | • Cell wall (peptidoglycan in Gram‑positive, peptidoglycan + teichoic acids in Gram‑negative, or none in archaea). <br>• Cytoplasm, ribosomes, DNA, mitochondria (in e‑eobacteria), flagella, pili, etc. | • No cell‑like organelles. <br>• Envelope (lipid membrane) or capsid (protein shell) that protects the genome. | | **Genetic Material** | • DNA (usually 4 bases, double‑stranded). | • DNA or RNA (single‑stranded or double‑stranded). | | **Replication** | • Cell division (mitosis‑like process). 
<br>• Can grow exponentially (douts per hour). | • Must hijate host‑cell machinery. <br>• No self‑division; replication is a “copy‑and‑paste” of its genome. | | **Energy Source** | • Metabolism: glycolysis, respiration, photosynthesis, etc. <br>• Can produce ATP, NADH, etc. | • No metabolic pathways; they simply use the host’s energy. | | **Infection Mechanism** | • Attach to host cells, invade, and multiply. <br>• Can be pathogenic or symbiotic. | • Attach to a receptor, inject genome, commandeer host cell to produce new particles. | | **Treatment** | • Antibiics (target cell wall, protein synthesis, DNA replication). | • Antivirals (target viral enzymes, replication steps). | | **Evolutionary History** | • E‑eobacteria (ancestors of all life). <br>• Archaea and pro‑eobacteria ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to explain symptoms and causes.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic metabolic disorder that occurs when the body cannot produce enough insulin (type 1) or cannot use insulin effectively (type 2). The result is high blood glucose (hyper‑glycemia) that can damage organs and tissues over time. Below is a quick overview of the most common symptoms and the main causes for each type. --- ## 1. Symptoms of Diabetes | Symptom | Why it happens | When it usually appears | |--------|----------------|------------------------| | **Freckly sweet or sticky mouth** | High glucose in saliva | Early or later | | **Freckly thirst** | Body tries to ex‑elate glucose | Early or later | | **Freckly urination** | Kidneys ex‑elate glucose | Early or later | | **Weight loss** (type 2) | Loss of energy from glucose | Early or later | | **Fatigue / low energy** | Cells can’t use glucose | Early or later | | **Poor wound healing** | High glucose impairs immunity | Later | | **Blurred vision / eye problems** | Hyper‑glycemia damages retina | Later | | **Nnerner pain or numbness** | Damage to small nerves | Later | | **Increased infections** | Immune system weakened | Later | | **Ack (type 1)** | Auto‑immune destruction of beta‑cells | Early (often in 2‑3 weeks) | | **Ack (type 2)** | Insulin resistance & β‑cell exhaustion | Later (months‑years) | > **Note** – Many people with type 2 diabetes have *no* obvious symptoms early on. They may only discover the problem when a routine blood test shows high glucose. --- ## 2. 
Causes of Diabetes ### Type 1 Diabetes (T‑1D) | Cause | Mechanism | Risk factors | |--------|------------|--------------| | **Auto‑immune attack on pancreatic β‑cells** | The immune system mistakenly targets insulin‑producing cells | • Genetics (family history) <br>• Viral infections (e.g., enteroviruses) <br>• Environmental triggers (e.g., diet, hygiene) | | **Genetic predisposition** | Certain genes (e.g., H‑DQ, H‑DR) increase susceptibility | • First‑generation parents < ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
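## Illustrative Sketch: What Expert Pruning Looks Like

The pruning methodology above is described in prose only, so here is a minimal, self-contained PyTorch sketch of the general idea: rank experts by how often the router selects them, keep the top few, and slice the router so its output dimension matches the reduced expert bank. This is not the pipeline used to build this checkpoint; `ToyMoELayer`, `prune_experts`, and the random activation counts are hypothetical stand-ins for the real GPT-OSS modules and the measured router statistics.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Simplified stand-in for one MoE block: a router plus a bank of expert MLPs."""
    def __init__(self, d_model: int, num_experts: int, top_k: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):
        logits = self.router(x)                          # [batch, num_experts]
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for b in range(x.shape[0]):                      # naive loops: clarity over speed
            for slot in range(self.top_k):
                expert = self.experts[int(idx[b, slot])]
                out[b] += weights[b, slot] * expert(x[b])
        return out

def prune_experts(layer: ToyMoELayer, kept: list) -> ToyMoELayer:
    """Keep only the experts listed in `kept` and slice the router rows to match.
    No retraining is involved; weights are copied over directly."""
    pruned = ToyMoELayer(layer.router.in_features, len(kept),
                         top_k=min(layer.top_k, len(kept)))
    with torch.no_grad():
        pruned.router.weight.copy_(layer.router.weight[kept])   # router rows of kept experts
        pruned.router.bias.copy_(layer.router.bias[kept])
        for new_i, old_i in enumerate(kept):
            pruned.experts[new_i].load_state_dict(layer.experts[old_i].state_dict())
    return pruned

# Rank experts by (stand-in) activation frequency and keep the top 13 of 32.
layer = ToyMoELayer(d_model=64, num_experts=32)
activation_counts = torch.randint(0, 1000, (32,))
kept = activation_counts.topk(13).indices.sort().values.tolist()
smaller = prune_experts(layer, kept)
print(smaller.router.out_features)  # 13 experts remain
```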
PhysicsWallahAI/Aryabhata-1.0
PhysicsWallahAI
2025-08-13T05:52:48Z
1,560
96
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "small-language-model", "jee", "exam-centric", "indian-education", "reinforcement-learning", "supervised-finetuning", "model-merging", "rejection-sampling", "mathematics", "ai4education", "physicswallah", "conversational", "en", "arxiv:2508.08665", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-18T07:40:10Z
--- license: cc-by-nc-4.0 tags: - small-language-model - jee - exam-centric - indian-education - reinforcement-learning - supervised-finetuning - model-merging - rejection-sampling - mathematics - ai4education - physicswallah language: - en model_name: PhysicsWallah/Aryabhata-1.0 model_creator: Physics Wallah AI Research model_type: Causal decoder-based model base_model: Qwen/Qwen2.5-Math-7B pipeline_tag: text-generation library_name: transformers --- # Aryabhatta 1.0 : An exam-focused language model for JEE Math ![](benchmark.png) ## Overview **Aryabhata 1.0** is a 7B parameter small language model for mathematics developed by **Physics Wallah AI Research**, optimized for high-stakes Indian competitive exams like **JEE Mains**. Despite its compact size, Aryabhata 1.0 achieves **state-of-the-art performance** on exam-centric reasoning tasks with impressive **token efficiency** and low inference cost. > 🚧 *Aryabhata 1.0 is an **experimental release**. We are actively seeking feedback — please contribute in the Discussion tab of this repo.* --- ## 🧠 Key Features - **Architecture**: 7B parameter causal decoder-based model. - **Exam-Centric Optimization**: Specifically tuned for JEE-level Mathematics reasoning. - **High Accuracy**: - **86%** on **JEE Mains January 2025** session. - **90.2%** on **JEE Mains April 2025** session. - **Token Efficiency**: Operates effectively around a **~2K token window**, compared to ~8K required by other reasoning models. - **Compute Efficient**: Trained on a **1x2 NVIDIA H100 GPU** using optimized pipeline. --- ## 🛠️ Training Details - **Training Data**: ~130K problem-solution pairs curated from proprietary Physics Wallah exam datasets. - **Training Pipeline**: - **Model Merging** - **Rejection Sampling** - **Supervised Fine-Tuning (SFT)** - **Reinforcement Learning with Verifiable Rewards (RLVR)** ### 🔀 Model Merging We began with model merging (Weighted average) to build a strong initialization (Aryabhata 0.5) by combining diverse model capabilities: * Qwen 2.5 Math: A robust math-centric LLM with solid symbolic math foundations. * Ace Math: An enhanced version of Qwen 2.5 Math, fine-tuned by NVIDIA for improved accuracy in mathematics benchmarks. * DeepSeek R1 Distill Qwen: A long-form reasoning model, fine-tuned on reasoning traces distilled from DeepSeek R1. ### 📚 Data Curation + Rejection Sampling We extracted ~250K raw questions from Physics Wallah's internal database and applied aggressive filtering and cleaning: * Removed: diagram-based, non-English, and option-heavy questions. * Kept: questions matching the distribution of JEE Main 2019–2024. Final curated dataset: ~130K high-quality questions. For each question: * Generated 4 CoTs using Aryabhata 0.5. * Retained only those leading to correct final answers. Resulting Dataset: * ~100K questions * ~350K high-quality CoTs We used this dataset for SFT. ### 🎯 Reinforcement Learning with Verifiable Rewards (RLVR) We used a custom in-house variant of Group Relative Policy Optimization (GRPO), adapted for math-specific reward functions. * Removed KL-divergence penalty * Removed clipping We used RLVR on the remaining ~30K questions. This multi-phase training strategy allows Aryabhata 1.0 to capture **pedagogy-aligned reasoning patterns**, making it highly effective for solving real student queries in mathematics. --- ## 📊 Performance Highlights ### Evaluation Setup All evaluations were performed with temperature = 0.0, and we report pass@1 accuracy. 
#### Evaluation Datasets We evaluated the model on two sets of official JEE Mains 2025 mathematics papers: * January Session: 10 question papers containing 250 questions. * April Session: 9 question papers containing 225 questions. Each paper includes a mix of: * Multiple Choice Questions (MCQs) with one correct option * Numeric Answer Type (NAT) questions requiring precise numerical responses #### Evaluation Metric We used a composite evaluation metric to reflect real-world grading rigor and reduce false positives: 1. Float Match * Compares predicted and target answers within a tolerance (±1e-9) * Handles rounding artifacts and small numerical errors robustly 2. String Match * Used for symbolic answers (e.g., fractions, radicals) * Uses strict exact match — predictions must match ground truth character-for-character 3. LLM-as-Judge (GPT-4o-mini) * Used for Mathematical equivalence for ambiguous formats ### 🔹 Accuracy Comparison Across Models ![](accuracy.png) > *Aryabhata has the best accuracy on JEE Main Maths, on par with frontier models* ### 🔹 Accuracy vs Token Usage ![](accuracy-vs-token.png) > *Aryabhata is on par with frontier models in terms of accuracy vs token usage* --- ## 🔧 Intended Use **Primary Use Cases**: - Competitive exam preparation (JEE Main level mathematics problems) - Question answering and doubt-solving systems - Educational tutoring and concept explanation ## 💡 How to Use ### 🧪 Using with 🤗 Transformers ```python from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_id = "PhysicsWallahAI/Aryabhata-1.0" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Define stop strings stop_strings = ["<|im_end|>", "<|end|>", "<im_start|>", "⁠```python\n", "⁠<|im_start|>", "]}}]}}]"] def strip_bad_tokens(s, stop_strings): for suffix in stop_strings: if s.endswith(suffix): return s[:-len(suffix)] return s # Create generation config (can also set temperature, top_p, etc.) 
generation_config = GenerationConfig( max_new_tokens=4096, stop_strings = stop_strings ) query = 'Find all the values of \\sqrt[3]{1}' messages = [{'role': 'system', 'content': 'Think step-by-step; put only the final answer inside \\boxed{}.'}, {'role': 'user', 'content': query}] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = tokenizer([text], return_tensors="pt") outputs = model.generate(**inputs, generation_config=generation_config, tokenizer=tokenizer) print(strip_bad_tokens(tokenizer.decode(outputs[0], skip_special_tokens=True), stop_strings)) ```` --- ### ⚡ Using with vLLM To run the model efficiently using vLLM: ```python from vllm import LLM, SamplingParams # Initialize model (downloads from Hugging Face if not local) llm = LLM(model="PhysicsWallahAI/Aryabhata-1.0") # Define prompt and sampling configuration query = 'Find all the values of \\sqrt[3]{1}' messages = [{'role': 'system', 'content': 'Think step-by-step; put only the final answer inside \\boxed{}.'}, {'role': 'user', 'content': query}] sampling_params = SamplingParams(temperature=0.0, max_tokens=4*1024, stop=["<|im_end|>", "<|end|>", "<im_start|>", "⁠```python\n", "⁠<|im_start|>", "]}}]}}]"]) # Run inference results = llm.chat(messages, sampling_params) # Print result print(results[0].outputs[0].text.strip()) ``` --- Read more about Aryabhata 1.0 in our [Technical Report](https://arxiv.org/abs/2508.08665) --- ## 🚀 Roadmap **Aryabhata 2.0** (Upcoming): - Extending domain coverage to **Physics** and **Chemistry** - Supporting **JEE Advanced**, **NEET**, and **Foundation syllabus** - Further optimization for affordability and accuracy in real-time deployments --- ## 🤝 Citation If you use this model, please cite: ```bibtex @misc{Aryabhata2025, title = {Aryabhata 1.0: A compact, exam-focused language model tailored for mathematics in Indian competitive exams, especially JEE Main.}, author = {Physics Wallah AI Research}, year = {2025}, note = {\url{https://huggingface.co/PhysicsWallahAI/Aryabhata-1.0}}, }
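## Illustrative Sketch: Float/String Answer Matching

The composite evaluation metric described above (float match within ±1e-9, strict string match for symbolic answers, LLM-as-judge for ambiguous formats) can be approximated in a few lines. The sketch below is an illustration of that description, not the evaluation harness actually used; the function names are hypothetical, and the LLM-judge fallback is only stubbed out because it requires an external API call.

```python
def float_match(pred: str, target: str, tol: float = 1e-9) -> bool:
    """Numeric comparison within a small tolerance, robust to rounding artifacts."""
    try:
        return abs(float(pred) - float(target)) <= tol
    except ValueError:
        return False

def string_match(pred: str, target: str) -> bool:
    """Strict exact match for symbolic answers (fractions, radicals, ...)."""
    return pred.strip() == target.strip()

def is_correct(pred: str, target: str) -> bool:
    if float_match(pred, target) or string_match(pred, target):
        return True
    # Ambiguous formats would be forwarded to an LLM judge (e.g. GPT-4o-mini);
    # omitted here because it needs an external API call.
    return False

print(is_correct("0.500000", "0.5"))  # True: numerically equal within tolerance
print(is_correct("1/2", "1/2"))       # True: exact symbolic match
print(is_correct("0.51", "0.5"))      # False
```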
yibingchen/SSL-BET
yibingchen
2025-08-13T05:51:53Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-13T05:49:58Z
--- license: apache-2.0 ---
indoempatnol/blockassist-bc-fishy_wary_swan_1755062789
indoempatnol
2025-08-13T05:50:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:50:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
llencia/blockassist-bc-wiry_wise_hedgehog_1755064190
llencia
2025-08-13T05:50:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry wise hedgehog", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T05:50:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry wise hedgehog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
strectelite/mnli-llama3.2-1b-qlora-10k
strectelite
2025-08-13T05:50:34Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "lora", "transformers", "text-classification", "en", "dataset:nyu-mll/glue", "arxiv:2305.14314", "base_model:meta-llama/Llama-3.2-1B-Instruct", "license:apache-2.0", "region:us" ]
text-classification
2025-08-13T05:34:01Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft tags: - base_model:adapter:meta-llama/Llama-3.2-1B-Instruct - lora - transformers license: apache-2.0 datasets: - nyu-mll/glue language: - en metrics: - accuracy - f1 - matthews_correlation pipeline_tag: text-classification --- # MNLI - LLaMA 3.2 1B - QLoRA (10k subset, 4-bit) ## Model Summary This is a **QLoRA fine-tuned** version of [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the **MNLI** (Multi-Genre Natural Language Inference) dataset from [GLUE](https://huggingface.co/datasets/glue/viewer/mnli). - **Base model:** LLaMA 3.2 1B Instruct - **Fine-tuning method:** [QLoRA](https://arxiv.org/abs/2305.14314) with 4-bit quantization - **Train subset:** 10k samples (8k train / 1k val / 1k test from train split) - **Evaluation:** Official GLUE dev sets (matched / mismatched) + held-out 1k test split - **Trainable parameters:** 5.64M (0.45% of base model) - **Hardware:** NVIDIA T4 (fp16) ⚠️ **Note:** This repo contains only the **LoRA adapter weights**. You need access to the base model from Meta to use it. --- ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, BitsAndBytesConfig from peft import PeftModel import torch BASE = "meta-llama/Llama-3.2-1B-Instruct" ADAPTER = "strectelite/mnli-llama3.2-1b-qlora-10k" bnb = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, bnb_4bit_use_double_quant=True ) tokenizer = AutoTokenizer.from_pretrained(ADAPTER) model = AutoModelForSequenceClassification.from_pretrained( BASE, num_labels=3, quantization_config=bnb, torch_dtype=torch.float16, device_map="auto" ) model = PeftModel.from_pretrained(model, ADAPTER).eval() inputs = tokenizer( "A man is playing guitar.", "A person is making music.", return_tensors="pt", truncation=True ) with torch.inference_mode(): logits = model(**{k: v.to(model.device) for k, v in inputs.items()}).logits probs = logits.softmax(-1) print(probs) ``` ## Results ### GLUE dev (official) | Set | Accuracy | F1 (macro) | F1 (weighted) | MCC | Kappa | MAE | |------------|----------|------------|---------------|--------|--------|--------| | Matched | 82.37% | 0.8210 | 0.8224 | 0.7358 | 0.7349 | 0.2068 | | Mismatched | 83.71% | 0.8348 | 0.8360 | 0.7558 | 0.7550 | 0.1894 | --- ### Held-out test split (1k from train) | Accuracy | F1 (macro) | F1 (weighted) | MCC | Kappa | MAE | |----------|------------|---------------|--------|--------|--------| | 83.10% | 0.8280 | 0.8288 | 0.7496 | 0.7461 | 0.2010 | --- ## Training Details - **Framework:** Hugging Face Transformers + PEFT + bitsandbytes - **Quantization:** 4-bit NF4 w/ double quantization - **LoRA config:** r=8, alpha=16, dropout=0.1, target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` - **Optimizer:** paged_adamw_8bit, lr=2e-4 - **Batch size:** 4 (gradient accumulation = 4 → effective 16) - **Epochs:** 2 - **Seed:** 42 - **Padding:** dynamic --- ## Intended Uses - **Primary:** Natural language inference on text pairs (entailment, neutral, contradiction). - **Languages:** English. - **Not intended for:** non-English inputs, factual question answering, safety-critical applications without human review. --- ## License - **Base model:** LLaMA 3.2 1B Instruct — [Meta license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) - **Adapter:** Apache License 2.0
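## Reproducing the QLoRA Setup (Illustrative)

The hyperparameters listed under *Training Details* map directly onto a PEFT + bitsandbytes configuration. The sketch below reconstructs that setup for illustration only: it is not the author's training script, `build_qlora_model` is a hypothetical helper, and access to the gated Meta base model is still required.

```python
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model

def build_qlora_model(base_id: str = "meta-llama/Llama-3.2-1B-Instruct"):
    # 4-bit NF4 quantization with double quantization, as stated in the card.
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.float16,
    )
    base = AutoModelForSequenceClassification.from_pretrained(
        base_id, num_labels=3, quantization_config=bnb, device_map="auto"
    )
    # LoRA hyperparameters from the Training Details section.
    lora = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,
        lora_alpha=16,
        lora_dropout=0.1,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # should report roughly 0.45% trainable
    return model
```

Training would then proceed with a standard `Trainer` loop using the listed optimizer (`paged_adamw_8bit`, lr 2e-4) and batch settings.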
AmanPriyanshu/gpt-oss-8.4b-specialized-health_or_medicine-pruned-moe-only-11-experts
AmanPriyanshu
2025-08-13T05:50:01Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "health-or-medicine", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T05:49:33Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - health-or-medicine - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Health Or Medicine GPT-OSS Model (11 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~8.4B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 11 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Health Or Medicine | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks - **Systematic Reduction**: Reduced from 32 to 11 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 34.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to health or medicine tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 11 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 11 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-health_or_medicine-pruned-moe-only-11-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-health_or_medicine-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-health_or_medicine-pruned-moe-only-11-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-health_or_medicine-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "What are the main functions of the human heart?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on health or medicine tasks: ### Example 1 **Question:** What are the main functions of the human heart? **Response:** ``` <|channel|>analysis<|message|>Need to answer: main functions: pumping blood, regulating blood pressure, pumping blood to all body, pumping blood to brain, pumping blood to lungs, pumping blood to kidneys, pumping blood to heart. Also endocrine functions: hormone production. Also electrical conduction. 
Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>**The human heart is a central organ that performs two main functions:** --- ### 1. **P pumping blood (circulatory function)** The heart’s “pump” keeps the blood moving through the body, ensuring that every organ receives oxygen, nutrients, and waste‑products. * **Right‑s side (right‑ventil)** – receives oxygen‑rich blood from the lungs and sends it to the body. * **Left‑s side (left‑ventil)** – receives oxygen‑poor blood from the body and sends it to the lungs. * The heart’s rhythm (the electrical conduction system) guarantees that the blood is pumped in a steady, coordinated way. --- ### 2. **Regulating blood pressure (he‑vascular function)** By adjusting the amount of blood pumped and the resistance of the blood vessels, the heart keeps the blood pressure within the normal range. * **When the heart pumps more blood** → blood pressure rises. * **When the heart pumps less blood** → blood pressure falls. The heart’s response to the body’s needs (e. g. exercise, stress, dehydration) is part of the autonomic nervous system. --- ### 3. **Hormine‑and‑ne‑h‑end‑function** The heart also produces hormones that help the body respond to stress and maintain fluid balance. * **A‑angi‑and‑ne‑h‑** (ad‑a‑and‑ne‑h‑) – a hormone that helps the blood vessels constren and dilate. * **Ne‑h‑** – a hormone that helps the kidneys retain water and salt. --- ### 4. **Electrical conduction system** The heart’s own electrical system (the sino‑atrial node, atr‑vent‑node, etc.) creates a rhythm that coordinates the pumping action. * **S‑node** – the natural pacemaker. * **AV‑node** – the electrical relay that slows the heart’s beat. * **Bundle of R‑r** – the conduction pathway that ensures the heart’s rhythm is coordinated. --- ### 5. **Mechanical‑and‑t ``` --- ### Example 2 **Question:** Explain the difference between bacteria and viruses. **Response:** ``` <|channel|>analysis<|message|>Need explain differences: size, shape, replication, host, etc.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Virons – The Key Differences** | Feature | Bacteria | Virons | |--------- |--------- |--------- | | **Definition** | Microorganisms that are *alive* and can grow, divide, and reproduce. | Microorganisms that are *not alive*; they are genetic material that can replicate inside a host cell. | | **Size & Shape** | Usually 1–2 µm in diameter; can be single‑cell (monococ) or multi‑cell (polococ). | 0.1–0.5 µm; often 1–2 strands of DNA, sometimes wrapped in a protein coat. | | **Cellular Structure** | Have a cell wall, cytoplasm, membrane, ribosomes, DNA, and sometimes mitochond‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑ ``` --- ### Example 3 **Question:** What are the symptoms and causes of diabetes? **Response:** ``` <|channel|>analysis<|message|>Need to answer symptoms and causes.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic disease in which the body cannot produce or use insulin, a hormone that regulates blood‑glucose levels. The most common type is type 2 diabetes, but type 1 and other forms also exist. Below are the typical symptoms and the main causes for each type. --- ## 1. 
Symptoms (common across all types) | Symptom | Why it happens | How it shows | |--------|--------------|--------------| | **Freckly thirst (polpol)** | Blood glucose is high → kidneys try to ex‑elate → thirst | “I’m always thirsty” | | **Freckly urination** | Kidneys ex‑elate glucose → more urine | “I’m always peiling” | | **Weight loss** | Insulin resistance → body burns glucose → weight loss | “I’m losing weight” | | **Fatigue** | Body can’t use glucose → energy low | “I feel tired” | | **Poor wound healing** | High glucose → impacts blood flow & immune response | “My cuts heal slowly” | | **Vision problems** | High glucose → damages small blood vessels in the eye | “I have blurry vision” | | **Increased infections** | High glucose → weak immune system | “I get infections more often” | | **Switching** | High glucose → body tries to store fat → swelling | “I have swelling” | | **High blood pressure** | Insulin resistance → high blood pressure | “I have high blood pressure” | | **High cholesterol** | Insulin resistance → high cholesterol | “I have high cholesterol” | | **High triglycer** | Insulin resistance → high triglycer | “I have high triglycer” | | **High blood sugar** | Insulin resistance → high blood sugar | “I have high blood sugar” | > **Note**: Symptoms can be subtle or absent in early stages, especially in type 2 diabetes. Many people have “no‑symptoms” until the disease is advanced. --- ## 2. Causes ### 2‑1. Type 1 Diabetes (Autoimmune) | Cause | How it works | |--------|--------------| | **Autoimmune attack on β‑cells** | The immune system mistakenly destroys insulin‑producing β‑cells in the pancreas | No insulin → high blood glucose | | **Genetic predis** ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
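## Illustrative Sketch: Using the Pruned Model for Speculative Decoding

The use cases above list speculative decoding, where a pruned checkpoint drafts tokens that the full GPT-OSS-20B then verifies. Below is a minimal sketch using the `assistant_model` argument of `transformers`' assisted generation. It is illustrative only: both models must fit in memory, the draft model must share the full model's tokenizer (which this pruned variant does), and the generation settings shown are assumptions rather than tested values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

full_id = "openai/gpt-oss-20b"
draft_id = "AmanPriyanshu/gpt-oss-8.4b-specialized-health_or_medicine-pruned-moe-only-11-experts"

tokenizer = AutoTokenizer.from_pretrained(full_id)
full_model = AutoModelForCausalLM.from_pretrained(
    full_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
# The pruned model acts as the draft: it proposes several tokens at a time,
# and the full model accepts or rejects them.
draft_model = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "What are the main functions of the human heart?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(full_model.device)

outputs = full_model.generate(
    **inputs,
    assistant_model=draft_model,
    max_new_tokens=256,
    do_sample=False,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```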