# Cosmos Guardrail

This page outlines a set of tools to ensure content safety in Cosmos. For implementation details, please consult the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai).

## Overview

Our guardrail system consists of two stages: pre-Guard and post-Guard.

Cosmos pre-Guard models are applied to text input, including input prompts and upsampled prompts.

* Blocklist: a keyword list checker for detecting harmful keywords
* Aegis: an LLM-based approach for blocking harmful prompts

Cosmos post-Guard models are applied to video frames generated by Cosmos models.

* Video Content Safety Filter: a classifier trained to distinguish between safe and unsafe video frames
* Face Blur Filter: a face detection and blurring module

## Usage

Cosmos Guardrail models are integrated into the diffusion and autoregressive world generation pipelines in this repo. Check out the [Cosmos Diffusion Documentation](../diffusion/README.md) and [Cosmos Autoregressive Documentation](../autoregressive/README.md) to download the Cosmos Guardrail checkpoints and run the end-to-end demo scripts with our Guardrail models.
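
The sketch below illustrates how the two stages described above fit around a generation call: pre-Guard checks run on the text prompt before any generation happens, and post-Guard checks run on the generated frames afterward. All names in it (`run_with_guardrails`, `generate_video`, and the callables passed in) are hypothetical placeholders rather than the actual Cosmos Guardrail API; in practice the diffusion and autoregressive pipelines wire these checks in for you, as noted in the Usage section.

```python
# Minimal sketch of the two-stage guardrail flow, assuming hypothetical
# callables for the blocklist, Aegis, video safety classifier, and face blur.
from typing import Callable, List, Optional

import numpy as np


def run_with_guardrails(
    prompt: str,
    generate_video: Callable[[str], np.ndarray],
    text_guards: List[Callable[[str], bool]],
    frame_is_safe: Callable[[np.ndarray], bool],
    face_blur: Callable[[np.ndarray], np.ndarray],
) -> Optional[np.ndarray]:
    """Apply pre-Guard checks to the prompt, generate, then post-Guard the frames."""
    # Pre-Guard: every text check (e.g. blocklist, Aegis) must pass
    # before any generation is attempted.
    for is_safe in text_guards:
        if not is_safe(prompt):
            print("Prompt rejected by pre-Guard; nothing is generated.")
            return None

    frames = generate_video(prompt)  # assumed shape: (num_frames, H, W, 3)

    # Post-Guard: reject the whole video if any frame is flagged unsafe.
    for frame in frames:
        if not frame_is_safe(frame):
            print("Generated video rejected by post-Guard.")
            return None

    # Post-Guard: blur detected faces in the surviving frames.
    return np.stack([face_blur(frame) for frame in frames])
```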