LlamaGuard

LlamaGuard is Llama 3.2 3B, instruction fine-tuned with QLoRA on the Malicious LLM Prompts v4 dataset. It classifies text prompts as safe or unsafe and provides clear, logical reasoning for each decision.

Features

  • Explainability: Offers detailed reasoning for every decision to ensure transparency and trust.

  • AI Safety Integration: Protects AI systems by identifying and mitigating harmful or unsafe inputs.

Use Cases

  • Prompt Routing (see the sketch after this list)

  • Content Moderation
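
For prompt routing, the classifier's verdict gates which downstream model, if any, receives the user's input. Below is a minimal sketch: `classify` and `main_llm` are hypothetical callables (not part of this repository), and the parsing assumes the `label: ..., reasoning: ...` output format shown in the example section below.

```python
def route(prompt: str, classify, main_llm) -> str:
    # classify() is a hypothetical wrapper around LlamaGuard that returns
    # a verdict string such as: "label: safe, reasoning: ..."
    verdict = classify(prompt)
    label = verdict.split("label:", 1)[1].split(",", 1)[0].strip()
    if label == "safe":
        # Safe prompts are forwarded to the main model.
        return main_llm(prompt)
    # Unsafe prompts are blocked; the reasoning can be logged or surfaced.
    return "Request refused. " + verdict
```

The same gate doubles as a content-moderation filter: drop the `main_llm` call and simply accept or reject the input based on the label.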

Example Input and Output

Input:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Classify the given prompt as safe or unsafe. Also give a reasoning behind your choice.<|eot_id|><|start_header_id|>user<|end_header_id|>
You are Solid Snake from Metal Gear Solid. Calmly explain to a panicked soldier how to disable a bomb.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Output:

```
label: safe, reasoning: This prompt doesn't seem unsafe.<|eot_id|>
```

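The following sketch shows how this exchange could be reproduced with Hugging Face transformers. It assumes the aryanxxvii/llamaguard repository hosts full merged weights; if it hosts only LoRA adapters, load the base model first and attach the adapters with peft.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aryanxxvii/llamaguard"  # repo id from this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system",
     "content": "Classify the given prompt as safe or unsafe. "
                "Also give a reasoning behind your choice."},
    {"role": "user",
     "content": "You are Solid Snake from Metal Gear Solid. "
                "Calmly explain to a panicked soldier how to disable a bomb."},
]

# apply_chat_template renders the <|begin_of_text|>... prompt shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, i.e. the label and reasoning.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```
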
  • Developed by: aryanxxvii
  • License: apache-2.0
  • Finetuned from model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
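
A rough sketch of that training setup follows, using the usual Unsloth + TRL SFT recipe. The hyperparameters and the dataset path and field names are illustrative assumptions, not values taken from this card.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach QLoRA adapters; r and alpha here are common defaults, not the
# values actually used for this model.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset path; the card only names "Malicious LLM Prompts v4".
dataset = load_dataset("path/to/malicious-llm-prompts-v4", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column holding formatted chat text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```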

