Update README.md
README.md

ResNet50 is a deep convolutional neural network with 50 layers, known for its "residual" connections.
It allows training of very deep networks by adding shortcut connections that skip one or more layers, making it highly effective for image classification tasks.
Our goal is to classify the source of each image with at least 85% accuracy and at least 80% recall.

***Installation instructions***

The following libraries or packages are required: numpy, pandas, tensorflow, keras, matplotlib, sklearn (scikit-learn), and cv2 (opencv-python).
We prepare the data for the model by sorting the images into two equally sized folders: real art, labeled 0, and AI-generated ("fake") art, labeled 1.
Our ResNet50 model is trained on 2,800 images that have been resized and normalized; the file formats are PNG and JPG.
The images are divided into a training set containing 90% of the data and a testing set containing the remaining 10%.
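
A rough preprocessing sketch of these steps is shown below; the folder paths (`data/real`, `data/fake`) and the 224x224 input size are illustrative assumptions, not values taken from the project code.

```python
# Sketch only: folder names and image size are assumptions.
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

IMG_SIZE = 224  # assumed ResNet50 input size

def load_folder(folder, label):
    """Load every PNG/JPG image in `folder`, resize it, and scale pixels to [0, 1]."""
    images, labels = [], []
    for name in os.listdir(folder):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = cv2.imread(os.path.join(folder, name))
        if img is None:
            continue
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        images.append(img.astype("float32") / 255.0)
        labels.append(label)
    return images, labels

# Assumed layout: real art -> label 0, AI-generated art -> label 1.
real_x, real_y = load_folder("data/real", 0)
fake_x, fake_y = load_folder("data/fake", 1)
X = np.array(real_x + fake_x)
y = np.array(real_y + fake_y)

# 90% training / 10% testing split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42)
```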

***ResNet50 model architecture***

The model is pre-trained on ImageNet, a large dataset containing more than a million images.
It applies transfer learning, freezing the initial layers of ResNet50 and training only the final layers.
The final layer, which makes the predictions, is a binary classification layer that uses a sigmoid activation function.
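
A minimal Keras sketch of this transfer-learning setup; here the entire pre-trained base is frozen and only a small classification head is trained, and the 128-unit hidden layer is an assumption rather than the project's exact head.

```python
# Sketch of the transfer-learning model: frozen ResNet50 base + sigmoid output.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),   # assumed intermediate head
    layers.Dense(1, activation="sigmoid"),  # binary output: real (0) vs AI-generated (1)
])
model.summary()
```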

***Training Details***

The model is trained using binary cross-entropy loss and the Adam optimizer.
During training, 20% of the training data is used as a validation set, independent of the test data, to monitor performance and avoid overfitting.
The model is trained for 5 epochs with a batch size of 32 and employs 4-fold cross-validation to ensure robust performance.
During each fold, the model's weights are saved after training, allowing the best-performing weights to be reused.
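
The sketch below combines these settings (binary cross-entropy, Adam, 5 epochs, batch size 32, 4-fold cross-validation with per-fold weight saving). It assumes `X_train` and `y_train` from the preprocessing sketch; the `build_model()` helper, the checkpoint file names, and using each fold's held-out quarter as validation (rather than a separate 20% split) are illustrative assumptions.

```python
# Sketch only: helper names, file names, and validation scheme are assumptions.
from sklearn.model_selection import KFold
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.callbacks import ModelCheckpoint

def build_model():
    # Same frozen-base setup as in the architecture sketch above.
    base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    return models.Sequential([base,
                              layers.GlobalAveragePooling2D(),
                              layers.Dense(1, activation="sigmoid")])

kfold = KFold(n_splits=4, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kfold.split(X_train), start=1):
    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Save this fold's weights so the best-performing ones can be reused later.
    checkpoint = ModelCheckpoint(f"fold_{fold}.weights.h5",
                                 save_weights_only=True, save_best_only=True)

    history = model.fit(X_train[train_idx], y_train[train_idx],
                        validation_data=(X_train[val_idx], y_train[val_idx]),
                        epochs=5, batch_size=32, callbacks=[checkpoint])
```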

***Performance Evaluation***

After training, the model is evaluated on the test set.
The following metrics are used to measure performance:
Accuracy: The percentage of correct classifications.
Precision, Recall, F1-Score: For evaluating the model's classification ability on both real art and AI-generated art images.
Confusion Matrix: Provides insights into classification performance. Displays true positives, false positives, true negatives, and false negatives.
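
These metrics can be computed with scikit-learn as sketched below, assuming the `model`, `X_test`, and `y_test` objects from the sketches above.

```python
# Evaluation sketch: accuracy, precision, recall, F1, and the confusion matrix.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

test_loss, test_acc = model.evaluate(X_test, y_test)
print("Test loss:", test_loss, "Test accuracy:", test_acc)

# The sigmoid output is the probability of the "AI-generated" class (label 1);
# threshold at 0.5 to obtain hard predictions.
y_pred = (model.predict(X_test).ravel() >= 0.5).astype(int)

cm = confusion_matrix(y_test, y_pred)
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("Confusion matrix:\n", cm)
```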

***To run the project***

1. Place the images in the respective training and testing folders.
2. Preprocess the images by resizing and normalizing them.
3. Train the model using the provided code.
4. Evaluate the model on the test set.

***Visualization results***

Confusion Matrix: To visualize the classification performance.
Training and Validation Metrics: Plots for accuracy and loss over the epochs.
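
A plotting sketch for these visualizations, assuming the `history` object returned by `model.fit(...)` and the confusion matrix `cm` from the evaluation sketch.

```python
# Visualization sketch: confusion matrix heatmap plus accuracy/loss curves.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Confusion matrix heatmap.
ConfusionMatrixDisplay(cm, display_labels=["real art", "AI-generated"]).plot()
plt.title("Confusion Matrix")
plt.show()

# Accuracy and loss over the epochs.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["accuracy"], label="train")
ax1.plot(history.history["val_accuracy"], label="validation")
ax1.set_title("Accuracy")
ax1.set_xlabel("epoch")
ax1.legend()
ax2.plot(history.history["loss"], label="train")
ax2.plot(history.history["val_loss"], label="validation")
ax2.set_title("Loss")
ax2.set_xlabel("epoch")
ax2.legend()
plt.tight_layout()
plt.show()
```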

***Results***

Test accuracy = 0.784
Test loss = 0.48
Precision = 0.76
Recall = 0.83
F1 = 0.79

*Confusion Matrix:*

