prithivMLmods committed c7ed37a (verified) · Parent(s): 06939bf

Update README.md

Files changed (1): README.md (+77, -0)
datasets:
- prithivMLmods/Deepfake-QA-10K-OPT
---

![9.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/CJCTKBIv92WwFYdnmGrFR.png)

# **Deepfake-QualityAssess2.1-85M**

Deepfake-QualityAssess2.1-85M is an image-classification model that assesses deepfake quality, separating good-quality from poor-quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch32-224-in21k`).

The model was trained on a moderately sized sample set to balance training cost against final efficiency metrics. Since the task involves classifying deepfake images across varying quality levels, the model was trained accordingly; future improvements will follow as the complexity of the task demands.

```python
id2label = {
    "0": "Issue In Deepfake",
    "1": "High Quality Deepfake",
}
```

```text
Classification report:

                        precision    recall  f1-score   support

    Issue In Deepfake      0.7851    0.7380    0.7610      2000
High Quality Deepfake      0.7765    0.8250    0.8000      2000

             accuracy                          0.7815      4000
            macro avg      0.7808    0.7815    0.7805      4000
         weighted avg      0.7808    0.7815    0.7805      4000
```
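As a quick sanity check on the report above: the macro average is the unweighted mean of the per-class scores, and because both classes have equal support (2000 each), the weighted average coincides with it.

```python
# Macro-averaged F1 is the unweighted mean of the per-class F1 scores.
f1_issue = 0.7610  # "Issue In Deepfake"
f1_high = 0.8000   # "High Quality Deepfake"
macro_f1 = (f1_issue + f1_high) / 2
print(round(macro_f1, 4))  # 0.7805
```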

# **Inference with Hugging Face Pipeline**
```python
from transformers import pipeline

# Load the model (device=0 selects the first GPU; omit or use device=-1 for CPU)
pipe = pipeline("image-classification", model="prithivMLmods/Deepfake-QualityAssess2.1-85M", device=0)

# Predict on an image
result = pipe("path_to_image.jpg")
print(result)
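The pipeline returns a list of `{"label", "score"}` dicts, one per class, sorted by score. A minimal sketch of picking the top prediction (the `result` values below are made up for illustration):

```python
# Hypothetical pipeline output: one score per label, in the shape returned by
# the Hugging Face image-classification pipeline.
result = [
    {"label": "High Quality Deepfake", "score": 0.82},
    {"label": "Issue In Deepfake", "score": 0.18},
]

# Keep the highest-scoring entry as the final prediction
best = max(result, key=lambda r: r["score"])
print(best["label"])  # High Quality Deepfake
```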

# **Inference with PyTorch**
```python
from transformers import ViTForImageClassification, ViTImageProcessor
from PIL import Image
import torch

# Load the model and processor
model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess2.1-85M")
processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess2.1-85M")

# Load and preprocess the image
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

# Map class index to label
label = model.config.id2label[predicted_class]
print(f"Predicted Label: {label}")
```
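The raw logits above are unnormalized scores; if a confidence value is wanted alongside the label, a softmax converts them into probabilities. A pure-Python sketch with made-up logit values:

```python
import math

# Example logits for the two classes (made-up values for illustration)
logits = [1.2, -0.3]

# Softmax: exponentiate each logit, then normalize so the scores sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
print([round(p, 3) for p in probs])  # [0.818, 0.182]
```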

# **Limitations of Deepfake-QualityAssess2.1-85M**
1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts.
2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance.
3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification.
4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training.
5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications.
6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made.
7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake.

# **Intended Use of Deepfake-QualityAssess2.1-85M**
- **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality.
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models.
- **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis.
- **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions.
- **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
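
For the dataset-filtering use case, one minimal approach is to run the classifier over each image and keep only samples labeled high quality above a score threshold. The sketch below mocks the pipeline's per-image top prediction; the file names, scores, and the 0.80 cutoff are all illustrative assumptions, not part of the model:

```python
# Mocked top-1 predictions per image, in the pipeline's {"label", "score"} shape
predictions = {
    "a.jpg": {"label": "High Quality Deepfake", "score": 0.91},
    "b.jpg": {"label": "Issue In Deepfake", "score": 0.76},
    "c.jpg": {"label": "High Quality Deepfake", "score": 0.55},
}

THRESHOLD = 0.80  # assumed cutoff; tune for the target dataset

# Keep only confidently high-quality samples
kept = [
    path
    for path, pred in predictions.items()
    if pred["label"] == "High Quality Deepfake" and pred["score"] >= THRESHOLD
]
print(kept)  # ['a.jpg']
```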