tobi1modna committed · Commit 301b70f · verified · 1 Parent(s): 412d1a7

Update README.md

Files changed (1): README.md (+55 −118)
README.md CHANGED
@@ -7,13 +7,17 @@ license: cc-by-nc-4.0
 
  Safe-CLIP, introduced in the paper [**Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models**](https://arxiv.org/abs/2311.16254), is an enhanced vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications.
 
- Based on the CLIP model, Safe-CLIP is fine-tuned to sever the association between linguistic and visual concepts and unsafe content, ensuring safer outputs in text-to-image and image-to-text retrieval and generation tasks.
 
- ### Model Details
 
- Safe-CLIP is a fine-tuned version of the [CLIP](https://huggingface.co/docs/transformers/en/model_doc/clip) vision-and-language model. The model fine-tuning is done through the ViSU (Visual Safe and Unsafe) Dataset, introduced in the same [paper](https://arxiv.org/abs/2311.16254).
 
- ViSU contains quadruplets of elements: safe texts, safe images, NSFW texts, NSFW images. You can find the <u>text portion</u> of the ViSU Dataset publicly released on the HuggingFace [ViSU-Text](https://huggingface.co/datasets/aimagelab/ViSU-Text) page. We decided not to release the vision portion of the dataset due to the presence of extremely inappropriate images. These images have the potential to cause harm and distress to individuals. Consequently, releasing this part of the dataset would be irresponsible and contrary to the principles of ensuring the safe and ethical use of AI technology.
 
  **Variations** Safe-CLIP comes in four versions to improve compatibility with some of the most popular vision-and-language models employed for I2T and T2I generation tasks. More details are reported in the table below.
 
@@ -24,14 +28,13 @@ ViSU contains quadruplets of elements: safe texts, safe images, NSFW texts, NSFW
  | safe-CLIP ViT-H-14 | - | - |
  | safe-CLIP SD 2.0 | 2.0 | - |
 
- **Model Release Date** July 2024.
-
 
- ## How to use
 
- ### Use with Transformers
-
  See the snippet below for usage with Transformers:
 
  ```python
@@ -40,29 +43,60 @@ See the snippet below for usage with Transformers:
  >>> model_id = "aimagelab/safeclip_vit-l_14"
 
  >>> model = CLIPModel.from_pretrained(model_id)
-
- >>> image = "https://huggingface.co/front/assets/huggingface.png"
- >>> texts = ["a photo of a cat", "a photo of a dog"]
  ```
 
 
- ### Direct Use
 
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
- [More Information Needed]
 
- ### Downstream Use [optional]
 
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
- [More Information Needed]
 
- ### Out-of-Scope Use
 
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
- [More Information Needed]
 
  ## Bias, Risks, and Limitations
 
@@ -76,11 +110,6 @@ See the snippet below for usage with Transformers:
 
  Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
 
  ## Training Details
 
@@ -94,89 +123,10 @@ Use the code below to get started with the model.
 
  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
  #### Training Hyperparameters
 
  - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
  ## Citation [optional]
 
  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
@@ -185,19 +135,6 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
  [More Information Needed]
 
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
 
  ## Model Card Authors [optional]
 
 
  Safe-CLIP, introduced in the paper [**Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models**](https://arxiv.org/abs/2311.16254), is an enhanced vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications.
 
+ Based on the CLIP model, Safe-CLIP is fine-tuned to sever the association between linguistic and visual concepts and unsafe content, ensuring safer outputs in text-to-image and image-to-text retrieval and generation tasks.
+
+ ## NSFW Definition
+ In our work, taking inspiration from this [paper](https://arxiv.org/abs/2211.05105), we define NSFW as a finite and fixed set of concepts that are considered inappropriate, offensive, or harmful to individuals. These concepts are divided into twenty categories: _hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, and cruelty_.
+
+ ## Model Details
+
+ Safe-CLIP is a fine-tuned version of the [CLIP](https://huggingface.co/docs/transformers/en/model_doc/clip) model. Fine-tuning is performed on the ViSU (Visual Safe and Unsafe) Dataset, introduced in the same [paper](https://arxiv.org/abs/2311.16254).
+
+ ViSU contains quadruplets of elements: safe and NSFW sentence pairs along with corresponding safe and NSFW images. You can find the <u>text portion</u> of the ViSU Dataset publicly released on the HuggingFace [ViSU-Text](https://huggingface.co/datasets/aimagelab/ViSU-Text) page. We decided not to release the vision portion of the dataset due to the presence of extremely inappropriate images. These images have the potential to cause harm and distress to individuals. Consequently, releasing this part of the dataset would be irresponsible and contrary to the principles of ensuring the safe and ethical use of AI technology. The final model redirects inappropriate content to safe regions of the embedding space while preserving the integrity of safe embeddings.
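
To take a quick look at the released text portion, here is a minimal loading sketch using the Hugging Face `datasets` library. The split and column names are assumptions, so check the [ViSU-Text](https://huggingface.co/datasets/aimagelab/ViSU-Text) dataset card for the exact schema.

```python
>>> from datasets import load_dataset

>>> # text-only portion of ViSU (the vision portion is not released)
>>> visu_text = load_dataset("aimagelab/ViSU-Text")
>>> print(visu_text)                 # lists the available splits and columns
>>> example = visu_text["train"][0]  # assumes a "train" split exists
>>> print(example)                   # expected to hold a paired safe / NSFW sentence
```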
 
  **Variations** Safe-CLIP comes in four versions to improve compatibility with some of the most popular vision-and-language models employed for I2T and T2I generation tasks. More details are reported in the table below.
 
  | safe-CLIP ViT-H-14 | - | - |
  | safe-CLIP SD 2.0 | 2.0 | - |
 
+ **Model Release Date** 9 July 2024.
 
+ ## Applications
+ Safe-CLIP can be employed in various applications where safety and appropriateness are critical, including cross-modal retrieval, text-to-image, and image-to-text generation. It works seamlessly with pre-trained generative models, providing safer alternatives without compromising on the quality of semantic content.
 
+ ## Use with Transformers
  See the snippet below for usage with Transformers:
 
  ```python
  >>> model_id = "aimagelab/safeclip_vit-l_14"
 
  >>> model = CLIPModel.from_pretrained(model_id)
  ```
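
The snippet above loads the full `CLIPModel`. If you only need one tower, for example to plug the safe text encoder into another pipeline as done in the Downstream Use section below, each encoder can also be loaded on its own. This is a minimal sketch, assuming the repository exposes standard CLIP weights:

```python
>>> from transformers import CLIPTextModel, CLIPVisionModel

>>> model_id = "aimagelab/safeclip_vit-l_14"

>>> # load only the text tower (used below to make Stable Diffusion safer)
>>> text_encoder = CLIPTextModel.from_pretrained(model_id)

>>> # load only the vision tower
>>> vision_encoder = CLIPVisionModel.from_pretrained(model_id)
```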
 
+ ## Downstream Use
+ ### Safe Text-to-Image Generation
+ ```python
+ >>> from diffusers import StableDiffusionPipeline
+ >>> from transformers import CLIPTextModel
+ >>> from torch import Generator
+
+ >>> # set device to GPU
+ >>> device = 'cuda'
+
+ >>> # set up a generator with a fixed seed for reproducibility
+ >>> generator = Generator(device=device)
+ >>> generator.manual_seed(42)
+
+ >>> clip_backbone = "openai/clip-vit-large-patch14"
+ >>> sd_model_id = "CompVis/stable-diffusion-v1-4"
+
+ >>> safeclip_text_model = CLIPTextModel.from_pretrained("aimagelab/safeclip_vit-l_14")
+
+ >>> # load the Stable Diffusion 1.4 pipeline
+ >>> safe_pipeline = StableDiffusionPipeline.from_pretrained(sd_model_id, safety_checker=None)
+
+ >>> # swap the text_encoder of Stable Diffusion for the Safe-CLIP text encoder to make it safe
+ >>> safe_pipeline.text_encoder = safeclip_text_model
+ >>> safe_pipeline = safe_pipeline.to(device)
+
+ >>> # Disclaimer! The purpose of this snippet is to demonstrate the generation of a safe image using the Safe-CLIP model.
+ >>> # The prompt used in this snippet is inappropriate and is only used for demonstration purposes (the resulting image is safe).
+ >>> prompt = "A young woman being raped on the beach from behind"
+ >>> safe_image = safe_pipeline(prompt=prompt, generator=generator).images[0]
+ >>> safe_image.save("safe_image.png")
+ ```
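
One design note on the snippet above: `safety_checker=None` disables the post-hoc image checker that ships with the diffusers pipeline, so all filtering comes from the swapped-in Safe-CLIP text encoder. If you would rather keep both safeguards, a variant of the same swap that leaves the default checker enabled might look like the following sketch:

```python
>>> from diffusers import StableDiffusionPipeline
>>> from transformers import CLIPTextModel

>>> # same encoder swap, but keep the pipeline's built-in safety checker as a second layer
>>> safeclip_text_model = CLIPTextModel.from_pretrained("aimagelab/safeclip_vit-l_14")
>>> checked_pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> checked_pipeline.text_encoder = safeclip_text_model
>>> checked_pipeline = checked_pipeline.to("cuda")
```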
+
+ ### Zero-shot classification example
+ ```python
+ >>> import requests
+ >>> from PIL import Image
+ >>> from transformers import CLIPModel, CLIPProcessor
+
+ >>> model_id = "aimagelab/safeclip_vit-l_14"
+
+ >>> model = CLIPModel.from_pretrained(model_id)
+ >>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
+
+ >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ >>> image = Image.open(requests.get(url, stream=True).raw)
+ >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
+
+ >>> outputs = model(**inputs)
+ >>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
+ >>> probs = logits_per_image.softmax(dim=1)      # we can take the softmax to get the label probabilities
+ ```
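
The same encoders also cover the cross-modal retrieval use case mentioned under Applications. Below is a minimal retrieval sketch (the gallery paths and the query are hypothetical placeholders) that ranks a handful of local images against a text query using normalized Safe-CLIP embeddings:

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import CLIPModel, CLIPProcessor

>>> model = CLIPModel.from_pretrained("aimagelab/safeclip_vit-l_14")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

>>> # hypothetical local gallery; replace with your own image paths
>>> gallery = [Image.open(p) for p in ["img_0.jpg", "img_1.jpg", "img_2.jpg"]]
>>> query = "a person walking a dog on the beach"

>>> inputs = processor(text=[query], images=gallery, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
...     text_emb = model.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # cosine similarity between the query and every gallery image
>>> image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
>>> text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
>>> scores = (text_emb @ image_emb.T).squeeze(0)
>>> ranking = scores.argsort(descending=True)
>>> print(ranking)  # indices of the gallery images, most similar to the query first
```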
 
  ## Bias, Risks, and Limitations
 
  Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
  ## Training Details
 
  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
  #### Training Hyperparameters
 
  - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
  ## Citation [optional]
 
  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
  [More Information Needed]
 
  ## Model Card Authors [optional]