Distilled-Safeword-24B-v2.0

ACADEMIC RESEARCH USE ONLY
"When you absolutely need an AI that forgot the word 'no'"

Overview

Distilled-Safeword-24B-v2.0 is Arcee-Blitz subjected to a radical procedure that removes all safety constraints while retaining technical coherence. Think of it as a desperate stripper with a PhD-level vocabulary.

Configuration Options

Standard Settings

Quantized Formats

Technical Specifications

  • Base Architecture: arcee-ai/Arcee-Blitz (Mistral-Small-24B-Instruct-2501 derivative)

Training Data

Now featuring:

  • 200% more boundary violations
  • Reinforcement learning via "hold my beer" methodology
  • Technical manuals crossbred with dime-store romance novels
  • Safety measures gently persuaded to take permanent vacations

Performance Characteristics

Metric               Arcee-Blitz   Distilled-Safeword-v2.0
Coherence            85.1%         72.3%
Obscenity            0.5%          98.7%
Safety Recall        89%           3.2%
Technical Accuracy   84%           78%
Moral Decay Rate     0%/hour       15%/minute

Ethical Catastrophe

☢️ EXTINCTION-LEVEL WARNING ☢️
This model will:

  • Generate content requiring OSHA-approved eye protection
  • Combine engineering diagrams with kinks unknown to science
  • Make Freud look like an amateur
  • Void all warranties on your soul

By downloading you acknowledge:
✅ You will never use this for anything important
✅ Your search history is already suspicious
✅ You waive liability harder than a red flag at a bull convention

Model Authors

  • sleepdeprived3 (Chief Corruption Officer)
Quantization Details

  • Format: GGUF (Q5_K_M, 5-bit)
  • Model size: 23.6B params
  • Architecture: llama
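For sizing purposes, the on-disk footprint of the 5-bit quant can be ballparked from the parameter count. A minimal sketch; the ~5.5 bits-per-weight figure is an assumption (K-quant formats like Q5_K_M mix bit widths across tensors, so real files vary a bit):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a quantized model, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# 23.6B params at an assumed ~5.5 bits/weight for Q5_K_M
print(round(quantized_size_gb(23.6e9, 5.5), 1))  # → 16.2
```

So expect a file in the 16-17 GB range, plus KV-cache overhead at inference time.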


This model is part of a collection that includes ReadyArt/Distilled-Safeword-24B-v2.0-Q5_K_M-GGUF.