Update README.md

README.md CHANGED
@@ -9,7 +9,19 @@ sdk_version: 4.44.0

---

Our project aims to develop an image classification system capable of distinguishing between paintings created by humans and those generated by artificial intelligence.

By leveraging a combination of classification techniques and machine learning, we aim to create a model that can accurately classify different types of images and detect the critical differences between human-made and AI-generated works of art.

For this project, we utilized several models and techniques, including a CNN, ELA (Error Level Analysis), ResNet50, and ViT (Vision Transformer).
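
ELA works by re-saving an image at a known JPEG quality and amplifying the pixel-wise difference, so regions with inconsistent compression behaviour stand out before the image is fed to the CNN. This README does not include the project's actual preprocessing code; the snippet below is only a minimal sketch of the general technique, and the quality and scaling values are assumptions rather than the project's settings.

```python
import io

from PIL import Image, ImageChops, ImageEnhance


def ela_map(path: str, quality: int = 90) -> Image.Image:
    """Minimal Error Level Analysis: re-save as JPEG and amplify the difference."""
    original = Image.open(path).convert("RGB")

    # Re-compress at a fixed JPEG quality (90 is an assumed value, not the project's setting)
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise absolute difference: regions that re-compress inconsistently light up
    diff = ImageChops.difference(original, recompressed)

    # Stretch the (usually faint) difference toward the full 0-255 range
    max_channel_diff = max(high for _, high in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_channel_diff)
```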

After building and running these models and evaluating their predictions, the results are as follows.

According to the *Accuracy* metric, two models meet the desired threshold of at least 85%: the *CNN+ELA* model (85%) and the *ViT* model (92%).

According to the *Recall* metric, for which we set a performance threshold of at least 80%, two models meet the requirement: the *CNN+ELA* model (83.5%) and the *ViT* model (95.7%).
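
As a reference for how these two thresholds can be checked, the sketch below computes both metrics with scikit-learn. The function name, the label convention (1 = AI-generated), and the default thresholds are illustrative assumptions, not the project's actual evaluation code.

```python
from sklearn.metrics import accuracy_score, recall_score


def passes_thresholds(y_true, y_pred, acc_min=0.85, recall_min=0.80):
    """Check test predictions against the accuracy/recall thresholds used in this project.

    y_true / y_pred are 0/1 label arrays, where 1 is assumed to mean "AI-generated".
    """
    accuracy = accuracy_score(y_true, y_pred)
    recall = recall_score(y_true, y_pred)  # recall of the positive (AI-generated) class
    return {
        "accuracy": accuracy,
        "recall": recall,
        "meets_requirements": accuracy >= acc_min and recall >= recall_min,
    }
```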

The following table presents the test metric results for all the models implemented in this project.

*(Results table image)*

After comparing the different results, the model with the highest performance across all metrics is the **ViT** model, which achieves the best results according to every criterion we set in the initial phase.
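
Since ViT is the best-performing model, a typical way to run it for inference is sketched below. This assumes the fine-tuned checkpoint was saved in Hugging Face `transformers` format, which this README does not state; the checkpoint directory, image path, and label names are placeholders, not values taken from this project.

```python
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Placeholder path: substitute the fine-tuned ViT checkpoint produced by this project.
MODEL_DIR = "path/to/finetuned-vit"

processor = ViTImageProcessor.from_pretrained(MODEL_DIR)
model = ViTForImageClassification.from_pretrained(MODEL_DIR)
model.eval()

image = Image.open("painting.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)  # e.g. "human" or "AI-generated", depending on the training labels
```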