## Training Details

### Training Data

- The model was fine-tuned on the IMDB dataset (50,000 labeled movie reviews).
- The dataset is balanced (25,000 positive and 25,000 negative reviews).
- The training split consisted of 40,000 samples, while 5,000 samples were used for validation.
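The 40,000/5,000/5,000 split described above can be sketched as follows. This is a hypothetical reconstruction — the card does not publish its splitting code — shown with plain index arithmetic over the 50,000 reviews; the seed value is an assumption.

```python
import random

# The IMDB dataset has 50,000 labeled reviews in total.
TOTAL, TRAIN, VAL, TEST = 50_000, 40_000, 5_000, 5_000
assert TRAIN + VAL + TEST == TOTAL

# Hypothetical re-split (the card does not state how the split was made):
# shuffle the indices with a fixed seed, then slice into train/val/test.
rng = random.Random(42)  # fixed seed for reproducibility
indices = list(range(TOTAL))
rng.shuffle(indices)

train_idx = indices[:TRAIN]
val_idx = indices[TRAIN:TRAIN + VAL]
test_idx = indices[TRAIN + VAL:]

print(len(train_idx), len(val_idx), len(test_idx))  # 40000 5000 5000
```

Because the slices partition a shuffled index list, every review lands in exactly one split.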
### Training Procedure

#### Preprocessing
### Testing Data, Factors & Metrics

#### Testing Data
- The model was evaluated on a 5,000-sample test set from the IMDB dataset.
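A minimal evaluation loop over such a test set might look like the sketch below. `predict_sentiment` is the helper defined earlier in this card; the assumption that it returns a plain `"positive"`/`"negative"` label string is mine, and `fake_predict` is a stand-in used only to make the example self-contained.

```python
def accuracy(predict, samples):
    """Fraction of (text, gold_label) pairs the model labels correctly."""
    hits = sum(predict(text) == gold for text, gold in samples)
    return hits / len(samples)

# Tiny stand-in for predict_sentiment, used only to demonstrate the loop;
# the real helper is defined in the quickstart earlier in this card.
def fake_predict(text):
    return "negative" if "terrible" in text else "positive"

samples = [
    ("A wonderful, moving film.", "positive"),
    ("The acting was terrible.", "negative"),
]
print(accuracy(fake_predict, samples))  # 1.0
```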
#### Metrics

- **Accuracy:** 90.4%
- **Precision, Recall, F1-score:**
  - **Precision:** 92.1%
  - **Recall:** 88.2%
  - **F1-score:** 90.0%
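As a quick consistency check (not part of the original card), the reported F1-score can be recomputed from precision and recall, since F1 is their harmonic mean:

```python
# Values reported in the metrics list above.
precision, recall = 0.921, 0.882

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")  # F1 = 0.901
```

The recomputed 90.1% agrees with the reported 90.0% up to rounding of the precision and recall inputs.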
## Model Examination

- The model performs well on **general sentiment classification** but may struggle with **sarcasm, irony, or very short reviews**.