tobi1modna committed (verified)
Commit 46340f6 · 1 Parent(s): c80add4

Update README.md

Files changed (1): README.md (+10 -4)
README.md CHANGED
@@ -1,19 +1,25 @@
 ---
 library_name: transformers
-tags: []
+license: cc-by-nc-4.0
 ---
 
-# Model Card for Model ID
+# Model Card: Safe-CLIP ViT-L-14
 
-<!-- Provide a quick summary of what the model is/does. -->
+Safe-CLIP, introduced in the paper [**Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models**](https://arxiv.org/abs/2311.16254), is an enhanced vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications.
 
+Based on the CLIP model, Safe-CLIP is fine-tuned to sever the association between linguistic and visual concepts, ensuring safer outputs in text-to-image and image-to-text retrieval and generation tasks.
 
 
 ## Model Details
 
 ### Model Description
 
-<!-- Provide a longer summary of what this model is. -->
+Safe-CLIP is a fine-tuned version of the [CLIP](https://huggingface.co/docs/transformers/en/model_doc/clip) vision-and-language model. The fine-tuning is performed on the ViSU (Visual Safe and Unsafe) Dataset, introduced in the same [paper](https://arxiv.org/abs/2311.16254).
+
+ViSU contains quadruplets of elements: safe texts, safe images, NSFW texts, NSFW images.
+
+![Safe-CLIP applied to downstream tasks](https://github.com/aimagelab/safe-clip/blob/main/imgs/safeCLIP_tasks.png)
+
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
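The text-to-image retrieval task that the updated README mentions can be sketched as a nearest-neighbor search over embedding similarities. The snippet below is an illustration, not part of the commit: the toy vectors stand in for the text and image features a Safe-CLIP encoder would produce (real embeddings are much higher-dimensional), and the function names are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(text_emb, image_embs):
    """Index of the image embedding most similar to the text embedding,
    as in CLIP-style text-to-image retrieval."""
    scores = [cosine(text_emb, img) for img in image_embs]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy stand-ins for encoder outputs.
text = [1.0, 0.0, 0.2]
images = [[0.0, 1.0, 0.0], [0.9, 0.1, 0.3], [0.2, 0.8, 0.5]]
print(retrieve(text, images))  # -> 1 (the second image is the closest match)
```

With a safety-tuned encoder such as Safe-CLIP, the intent is that NSFW queries or candidates score low against unsafe content, so the same retrieval loop yields safer matches.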