prithivMLmods committed on
Commit 6008e6d · verified · 1 Parent(s): d1f8ae4

Update README.md

Files changed (1): README.md (+102 -3)
README.md CHANGED
@@ -1,3 +1,102 @@
---
license: apache-2.0
---

---

## **Label Space: 5 Classes**

The model classifies each image into one of the following content categories:

```
Class 0: "Anime Picture"
Class 1: "Hentai"
Class 2: "Normal"
Class 3: "Pornography"
Class 4: "Enticing or Sensual"
```
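
For downstream filtering, these five labels are often collapsed into a coarser safe/unsafe decision. A minimal sketch of one possible grouping follows; the grouping and the 0.5 threshold are illustrative assumptions, not part of the model, and `classify_explicit_content` refers to the inference code further down.

```python
# Illustrative grouping of the 5 classes into a binary moderation decision.
# Which labels count as "unsafe", and the 0.5 threshold, are assumptions.
UNSAFE_LABELS = {"Hentai", "Pornography", "Enticing or Sensual"}

def is_unsafe(prediction, threshold=0.5):
    """`prediction` maps label name -> probability, as returned by classify_explicit_content below."""
    return sum(prediction.get(label, 0.0) for label in UNSAFE_LABELS) >= threshold
```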

---

## **Install Dependencies**

```bash
pip install -q transformers torch pillow gradio
```
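
A quick way to confirm the packages imported correctly (a hypothetical check, not part of the original card):

```python
import PIL
import gradio
import torch
import transformers

# Print installed versions to verify the environment.
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("gradio:", gradio.__version__)
print("pillow:", PIL.__version__)
```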

---

## **Inference Code**

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/siglip2-x256-explicit-content"  # Replace with your model path if needed
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# ID to Label mapping
id2label = {
    "0": "Anime Picture",
    "1": "Hentai",
    "2": "Normal",
    "3": "Pornography",
    "4": "Enticing or Sensual"
}

def classify_explicit_content(image):
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Gradio Interface
iface = gr.Interface(
    fn=classify_explicit_content,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=5, label="Predicted Content Type"),
    title="siglip2-x256-explicit-content",
    description="Classifies images into explicit, suggestive, or safe categories (e.g., Hentai, Pornography, Normal)."
)

if __name__ == "__main__":
    iface.launch()
```
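
For scripted use without the Gradio UI, a minimal single-image sketch along the same lines (the file path `example.jpg` is a placeholder):

```python
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

model_name = "prithivMLmods/siglip2-x256-explicit-content"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

labels = ["Anime Picture", "Hentai", "Normal", "Pornography", "Enticing or Sensual"]

# "example.jpg" is a placeholder; point it at any local image file.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=1).squeeze()
pred_id = int(probs.argmax())
print(f"{labels[pred_id]}: {probs[pred_id].item():.3f}")
```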

---

```
Classification Report:
                      precision    recall  f1-score   support

       Anime Picture     0.8940    0.8718    0.8827      5600
              Hentai     0.8961    0.8935    0.8948      4180
              Normal     0.9100    0.8895    0.8997      5503
         Pornography     0.9496    0.9654    0.9574      5600
 Enticing or Sensual     0.9132    0.9429    0.9278      5600

            accuracy                         0.9137     26483
           macro avg     0.9126    0.9126    0.9125     26483
        weighted avg     0.9135    0.9137    0.9135     26483
```
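
A report in this layout matches scikit-learn's `classification_report`; a minimal sketch of how such a table is generated (the dummy id lists stand in for predictions collected over a labeled test set, which is not included in this card):

```python
from sklearn.metrics import classification_report

labels = ["Anime Picture", "Hentai", "Normal", "Pornography", "Enticing or Sensual"]

# Dummy class ids for illustration; in practice y_true/y_pred come from running
# the model over an evaluation split and taking the argmax of the logits.
y_true = [0, 1, 2, 3, 4, 2]
y_pred = [0, 1, 2, 3, 4, 4]

print(classification_report(y_true, y_pred, target_names=labels, digits=4))
```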

---

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/psonZ0OXSjqgLRDkFtRTh.png)

---