# L3.1-Moe-4x8B-v0.1


This model is a Mixture of Experts (MoE) made with mergekit-moe. It uses the following base models:

- argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
- sequelbox/Llama3.1-8B-PlumCode
- sequelbox/Llama3.1-8B-PlumMath
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2

Heavily inspired by mlabonne/Beyonder-4x7B-v3.

## Quantized models

- GGUF by mradermacher
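As a sketch of how the GGUF quantizations can be run locally, here is a minimal example using the llama-cpp-python bindings; the model path below is a placeholder for whichever quantization level you download from the GGUF repo, and the context size is illustrative rather than a recommendation from the model authors.

```python
# Minimal sketch: running a GGUF quantization with llama-cpp-python.
# The model_path is a placeholder -- point it at whichever quant file
# (e.g. a Q4_K_M variant) you downloaded from the GGUF repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./L3.1-Moe-4x8B-v0.1.Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,  # context window; adjust to available memory
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```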

## Configuration

```yaml
base_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "I want"
  - source_model: sequelbox/Llama3.1-8B-PlumCode
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: sequelbox/Llama3.1-8B-PlumMath
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
  - source_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
```

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 19.15 |
| IFEval (0-shot)     | 43.47 |
| BBH (3-shot)        | 27.86 |
| MATH Lvl 5 (4-shot) | 11.10 |
| GPQA (0-shot)       |  1.23 |
| MuSR (0-shot)       |  3.98 |
| MMLU-PRO (5-shot)   | 27.27 |
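For readers who want to re-run these benchmarks, the following is a hedged sketch using lm-evaluation-harness's Python entry point. The `leaderboard` task group and the argument names are assumptions about the installed harness version, not something documented in this card, so check your local `lm_eval` task list before running.

```python
# Hedged sketch: re-running the Open LLM Leaderboard tasks with
# lm-evaluation-harness. Task-group availability depends on the
# installed lm_eval version; this is not the card authors' setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=moeru-ai/L3.1-Moe-4x8B-v0.1,dtype=bfloat16",
    tasks=["leaderboard"],  # assumed task group covering IFEval, BBH, etc.
    batch_size="auto",
)
print(results["results"])
```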
