Image Classification (Keras)

litav committed 7d41b2d (verified), parent: ffd937f

Update README.md

Files changed (1): README.md (+27 -7)

README.md

This project provides a Convolutional Neural Network (CNN) model for classifying images as real or fake art.
A CNN is a type of deep learning model specifically designed to process and analyze visual data by applying convolutional layers that automatically detect patterns and features in images.
Our goal is to classify the source of each image with at least 85% accuracy and at least 80% recall.

***Installation instructions***

The following libraries or packages are required: numpy, pandas, tensorflow, keras, matplotlib, sklearn, cv2.
We prepare the data for the model by sorting the images into two equally sized folders: real art (labeled 0) and fake art (labeled 1).
Our CNN model is trained on 2,800 images that have been resized and normalized; the file formats are PNG and JPG.
The images are divided into a training set containing 90% of the data and a testing set containing the remaining 10%.
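
The preprocessing code itself is not shown in this README; the following is a minimal sketch of how the loading, resizing, normalization, and 90/10 split described above could look. The folder names `data/real` and `data/fake`, the 128x128 target size, and the use of `train_test_split` are assumptions for illustration.

```python
import os

import cv2
import numpy as np
from sklearn.model_selection import train_test_split

IMG_SIZE = 128  # assumed target resolution; the README does not state the exact size

def load_folder(folder, label):
    """Read every PNG/JPG image in `folder`, resize and normalize it, and attach `label`."""
    images, labels = [], []
    for name in os.listdir(folder):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = cv2.imread(os.path.join(folder, name))
        if img is None:
            continue
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        images.append(img.astype("float32") / 255.0)  # normalize pixel values to [0, 1]
        labels.append(label)
    return images, labels

# Hypothetical folder layout: real art -> label 0, fake art -> label 1
real_x, real_y = load_folder("data/real", 0)
fake_x, fake_y = load_folder("data/fake", 1)

X = np.array(real_x + fake_x)
y = np.array(real_y + fake_y)

# 90% training / 10% testing, as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=42
)
```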

***CNN model architecture***

Convolutional Layers: extract features from the images, applying 32 or 64 filters of size 3x3; the activation function used is ReLU.
MaxPooling Layers: reduce the spatial dimensions with a 2x2 pool size.
Flatten: converts the multi-dimensional output of the previous layers into a one-dimensional vector.
Dropout Layer: prevents overfitting with a dropout rate of 0.5, applied after the first Dense layer.
Dense Layer: the final dense layer for classification, with a sigmoid activation function.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66d6f28a19214d743ca1eb43/jU1a0NmnMOJS9EOu0K_aO.png)
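
As a rough sketch of the layer stack described above (the authoritative definition is the code in this repository), the model could be written in Keras as follows; the 128x128x3 input shape and the width of the first Dense layer are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(128, 128, 3)):
    """Conv/pool blocks with 32 and 64 3x3 filters, then Flatten -> Dense -> Dropout(0.5) -> sigmoid."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),   # width of this first Dense layer is an assumption
        layers.Dropout(0.5),                    # dropout after the first Dense layer
        layers.Dense(1, activation="sigmoid"),  # binary output: real art (0) vs. fake art (1)
    ])
    return model
```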

***Training Details***

The model is trained with binary cross-entropy loss and the Adam optimizer. It is validated with 20% of the training data reserved for validation.
The model employs 4-fold cross-validation to ensure robust performance. Two callbacks are used:
EarlyStopping: Stops training if the validation accuracy ceases to improve for a set number of epochs.
ModelCheckpoint: Saves the best weights during training based on validation accuracy.
The best-performing model from each fold is saved, and the model with the best weights overall is selected for final testing.
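
A minimal sketch of this training scheme, reusing `X_train`, `y_train`, and `build_model` from the sketches above. The epoch count, batch size, patience, and checkpoint filenames are assumptions, and for simplicity each fold's held-out portion serves as its validation data.

```python
from sklearn.model_selection import KFold
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

kfold = KFold(n_splits=4, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kfold.split(X_train), start=1):
    model = build_model()
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    callbacks = [
        # Stop once validation accuracy stops improving (patience value is an assumption)
        EarlyStopping(monitor="val_accuracy", patience=5, restore_best_weights=True),
        # Save the best weights of this fold, selected by validation accuracy
        ModelCheckpoint(f"best_fold_{fold}.keras", monitor="val_accuracy", save_best_only=True),
    ]

    history = model.fit(
        X_train[train_idx], y_train[train_idx],
        validation_data=(X_train[val_idx], y_train[val_idx]),
        epochs=30, batch_size=32,  # assumed values
        callbacks=callbacks,
        verbose=2,
    )
```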

***Performance Evaluation***

After training, the model is evaluated on the test set. The following metrics are used to measure performance:
Accuracy: The percentage of correct classifications.
Precision, Recall, F1-Score: For evaluating the model's classification ability.
Confusion Matrix: Displays true positives, false positives, true negatives, and false negatives.
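
These metrics can be computed with sklearn after reloading the best checkpoint; a short sketch, where the checkpoint filename and the 0.5 decision threshold are assumptions and `X_test`/`y_test` come from the preprocessing sketch above:

```python
from tensorflow import keras
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Reload the overall best model (filename is an assumption)
best_model = keras.models.load_model("best_fold_1.keras")

# Threshold the sigmoid outputs at 0.5 to get hard class labels
probs = best_model.predict(X_test).ravel()
preds = (probs >= 0.5).astype(int)

print("Accuracy :", accuracy_score(y_test, preds))
print("Precision:", precision_score(y_test, preds))
print("Recall   :", recall_score(y_test, preds))
print("F1-score :", f1_score(y_test, preds))
print("Confusion matrix:\n", confusion_matrix(y_test, preds))
```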

***Instructions***

To run the project:
Place the images in the respective training and testing folders.
Preprocess the images by resizing and normalizing them.
Train the model using the provided code.
Evaluate the model on the test set.

***Visualization results***

Confusion Matrix: To visualize the classification performance.
Training and Validation Metrics: Plots for accuracy and loss over the epochs.
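
One possible way to produce these plots with matplotlib, assuming `history` is the History object returned by `model.fit` in the training sketch and `preds`/`y_test` come from the evaluation sketch:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Accuracy and loss curves over the epochs
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["accuracy"], label="train")
ax1.plot(history.history["val_accuracy"], label="validation")
ax1.set_title("Accuracy")
ax1.set_xlabel("epoch")
ax1.legend()

ax2.plot(history.history["loss"], label="train")
ax2.plot(history.history["val_loss"], label="validation")
ax2.set_title("Loss")
ax2.set_xlabel("epoch")
ax2.legend()
plt.show()

# Confusion matrix heatmap for the test-set predictions
ConfusionMatrixDisplay.from_predictions(y_test, preds, display_labels=["real art", "fake art"])
plt.show()
```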

***Results***

Test accuracy = 0.77

Test loss = 0.49

Precision = 0.77

Recall = 0.77

F1 = 0.77

*Confusion Matrix:*

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66d6f28a19214d743ca1eb43/I8jkHlwQVVUNbO4dWbaQX.png)