Our project aims to develop an image classification system capable of distinguis…
By leveraging a combination of classification techniques and machine learning, we aim to create a model that can accurately classify different types of images and detect the critical differences between works of art.
For this project, we utilized several models, including CNN, CNN+ELA (Error Level Analysis), ResNet50, and ViT (Vision Transformer).
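As a rough illustration of the ELA preprocessing step, here is a minimal sketch using Pillow. The recompression quality (90) and the amplification factor are illustrative choices, not necessarily the values used in this project:

```python
# Error Level Analysis (ELA) sketch: recompress a JPEG and amplify the
# pixel-wise difference, which tends to highlight edited regions.
# quality=90 and the scaling scheme are illustrative assumptions.
import io

from PIL import Image, ImageChops


def ela_image(path: str, quality: int = 90) -> Image.Image:
    """Return the amplified difference between an image and its recompression."""
    original = Image.open(path).convert("RGB")

    # Re-save at a fixed JPEG quality into an in-memory buffer.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)

    # Pixel-wise absolute difference between the two versions.
    diff = ImageChops.difference(original, recompressed)

    # Amplify the (usually faint) difference so a classifier can see it.
    extrema = diff.getextrema()
    max_diff = max(hi for _, hi in extrema) or 1
    factor = 255 // max_diff
    return diff.point(lambda px: min(255, px * factor))
```

The resulting ELA map can then be fed to the CNN in place of (or alongside) the raw image.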
---
After building and running these models and evaluating their predictions, the results are as follows:
<span style="font-size:18px">It can be observed that, according to the *Accuracy* metric, two models meet the desired threshold of at least 85%: the *CNN+ELA* model (85%) and the *ViT* model (92%).</span>
<span style="font-size:18px">According to the *Recall* metric, we set a performance threshold of at least 80%, and two models meet this requirement: the *CNN+ELA* model (83.5%) and the *ViT* model (95.7%).</span>
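The threshold checks above can be sketched in plain Python; the helper names and the example labels below are illustrative, not the project's actual test data:

```python
# Accuracy / Recall threshold check, as described in the text
# (at least 85% accuracy and 80% recall).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def recall(y_true, y_pred, positive=1):
    """True positives over all actual positives."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0


def meets_thresholds(y_true, y_pred, acc_min=0.85, rec_min=0.80):
    """Check a model's predictions against both performance thresholds."""
    return accuracy(y_true, y_pred) >= acc_min and recall(y_true, y_pred) >= rec_min
```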
The following table presents the test metric results for all the models implemented in this project.
<img src="https://cdn-uploads.huggingface.co/production/uploads/66d6f28a19214d743ca1eb43/q6g7SAHT-enMFOkoXqxWc.png" alt="Description" width="500" style="display: block; margin-left: auto; margin-right: auto;"/>
**After comparing the different results, the model with the highest performance across all metrics is the *ViT* model, achieving the best results according to all the criteria we set in the initial phase.**
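The "best across all metrics" comparison can be sketched as follows. Only the CNN+ELA and ViT numbers come from the text above; the helper function is a hypothetical illustration, not the project's code:

```python
# Pick the model that is top-ranked on every metric, if one exists.
# Metric values for CNN+ELA and ViT are taken from the text above.
results = {
    "CNN+ELA": {"accuracy": 0.85, "recall": 0.835},
    "ViT":     {"accuracy": 0.92, "recall": 0.957},
}


def best_on_all_metrics(results):
    """Return the model that wins every metric, or None if no single winner."""
    metrics = next(iter(results.values())).keys()
    # For each metric, find the model with the highest score.
    winners = {m: max(results, key=lambda name: results[name][m]) for m in metrics}
    names = set(winners.values())
    return names.pop() if len(names) == 1 else None
```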
---