NSFW Image Classification Model - Fine-Tuned FocalNet

Overview

This repository contains a fine-tuned NSFW image classification model based on FocalNet, optimized for content moderation tasks. The model classifies images into three categories: Safe, Questionable, and Unsafe.

This model is an improved version of MichalMlodawski/nsfw-image-detection-large, which was originally built on microsoft/focalnet-base. It has been further fine-tuned with additional image data obtained from nostrcheck.me, enhancing its accuracy and robustness in identifying inappropriate content.

Model Details

  • Base Model: microsoft/focalnet-base
  • Fine-Tuned From: MichalMlodawski/nsfw-image-detection-large
  • Architecture: FocalNetForImageClassification
  • Image Input Size: 224x224 pixels
  • Classification Labels:
    • Safe: Appropriate content
    • Questionable: Requires manual review
    • Unsafe: NSFW or inappropriate content
  • Training Framework: PyTorch
  • Model Format: safetensors (a minimal usage sketch follows this list)
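
The snippet below is a minimal inference sketch using the transformers library. The repository id and image path are placeholders, not values taken from this model card; substitute the actual Hugging Face repo id of this fine-tuned checkpoint.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, FocalNetForImageClassification

# Placeholder repo id -- replace with the actual Hugging Face id of this checkpoint.
MODEL_ID = "your-namespace/nsfw-focalnet-finetuned"

processor = AutoImageProcessor.from_pretrained(MODEL_ID)
model = FocalNetForImageClassification.from_pretrained(MODEL_ID)
model.eval()

# The processor resizes and normalizes the image to the expected 224x224 input.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label (Safe / Questionable / Unsafe).
probs = torch.softmax(logits, dim=-1)[0]
predicted = int(probs.argmax())
print(model.config.id2label[predicted], f"{probs[predicted]:.2%}")
```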

Limitations

  • False Positives/Negatives: Although the model is generally accurate, it can still misclassify images in either direction; a simple mitigation is sketched after this list.
  • Contextual Understanding: The model analyzes images in isolation, without considering accompanying text or metadata.
  • License Restrictions: This model is released under CC BY-NC-SA 4.0, which requires attribution, prohibits commercial use, and mandates sharing under the same license.
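
One way to handle residual false positives and false negatives is to act automatically only on confident Safe/Unsafe predictions and escalate everything else for manual review, mirroring the intent of the Questionable label. The sketch below is illustrative post-processing only; the threshold values and label spellings are assumptions, not recommendations from this model card.

```python
from typing import Dict

def route_prediction(probs: Dict[str, float],
                     unsafe_threshold: float = 0.85,
                     safe_threshold: float = 0.85) -> str:
    """Map class probabilities to a moderation action.

    Predictions that are not confidently Safe or Unsafe are routed to
    manual review. Thresholds here are arbitrary examples.
    """
    if probs.get("Unsafe", 0.0) >= unsafe_threshold:
        return "block"
    if probs.get("Safe", 0.0) >= safe_threshold:
        return "allow"
    return "manual_review"

# Example: a borderline prediction gets escalated to a human reviewer.
print(route_prediction({"Safe": 0.55, "Questionable": 0.30, "Unsafe": 0.15}))
# -> "manual_review"
```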

Acknowledgments

This model is a fine-tuned version of MichalMlodawski/nsfw-image-detection-large, originally trained on microsoft/focalnet-base. Additional training data was obtained from nostrcheck.me to further improve its performance.

Contributing and Feedback

This model is part of an ongoing effort to improve content moderation through AI. Contributions and feedback are welcome.
