Update README.md
---

### Model Card for Model ID

This model classifies images as either 'real' or 'fake (AI-generated)' using a Vision Transformer (ViT).

Our goal is to classify the source of an image with at least 85% accuracy and at least 80% recall.

### Model Description

This model leverages the Vision Transformer (ViT) architecture, which applies self-attention mechanisms to process images. The model classifies images into two categories: 'real' and 'fake (AI-generated)'. It captures intricate patterns and features that help distinguish between the two categories without the need for Convolutional Neural Networks (CNNs).

### Direct Use
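The accuracy and recall targets stated above can be checked with a short snippet once predictions are collected. This is a minimal sketch with illustrative labels (not real evaluation data), where `1` denotes 'fake (AI-generated)' and `0` denotes 'real':

```python
# Illustrative predictions for the binary real-vs-fake task.
# 1 = fake (AI-generated), 0 = real.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Accuracy: fraction of images classified correctly.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

# Recall on the 'fake' class: fraction of fake images correctly flagged.
true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
recall = true_pos / sum(y_true)

print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")  # accuracy=0.80, recall=0.83
```

In practice the same numbers can be obtained from `sklearn.metrics.accuracy_score` and `recall_score` on the full evaluation set.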