RealGuardrails Models

This model was trained on the RealGuardrails dataset, an instruction-tuning dataset focused on improving system prompt adherence and precedence. It was first trained via SFT on the systemmix split (150K examples) using our custom training library torchllms (yielding normster/RealGuardrails-Qwen2.5-7B-SFT), then trained via DPO on the preferencemix split (30K examples), and finally converted back to a transformers-compatible checkpoint.

Training Hyperparameters

| Name | Value |
| --- | --- |
| DPO beta | 0.01 |
| optimizer | AdamW |
| batch size | 128 |
| learning rate | 1e-5 |
| lr scheduler | cosine with 50 warmup steps |
| betas | (0.9, 0.999) |
| eps | 1e-8 |
| weight decay | 0 |
| epochs | 1 |
| max grad norm | 1.0 |
| precision | bf16 |
| max length | 4096 |
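To make the DPO beta value above concrete, here is a minimal sketch of the per-pair DPO loss. This is an illustration only, not the torchllms implementation, and the example log-probabilities are made up; inputs are the summed log-probabilities of the chosen and rejected responses under the policy and the frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.01):
    """DPO loss for a single preference pair (beta=0.01 as in the table)."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)), written in a numerically stable form
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# Hypothetical log-probs: the policy favors the chosen response more
# than the reference does, so the loss falls below log(2) (~0.693)
loss = dpo_loss(-10.0, -20.0, -12.0, -18.0)
```

With the small beta of 0.01, rewards stay close to zero and the loss stays near log(2), which keeps the DPO update gentle relative to the SFT starting point.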
Base model: Qwen/Qwen2.5-7B