# bertweet-base-sentiment-tuned
This model is a fine-tuned version of vinai/bertweet-base on the EPFL CS-433 Text Classification dataset. It achieves the following results on the evaluation set:
- Loss: 0.2120
- Accuracy: 0.9126
- F1: 0.9126
- Precision: 0.9127
- Recall: 0.9126
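The four metrics above are nearly identical, which is consistent with support-weighted averaging (an assumption — the averaging mode is not stated in this card): under weighted averaging, recall reduces to plain accuracy. A minimal pure-Python sketch of that identity:

```python
from collections import Counter

def weighted_recall(y_true, y_pred):
    """Support-weighted recall; with 'weighted' averaging it equals accuracy."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        # per-class recall, weighted by the class's share of the dataset
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        score += (n / total) * (correct / n)
    return score

# Toy labels (illustrative only, not from the evaluation set)
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
assert abs(weighted_recall(y_true, y_pred) - accuracy) < 1e-12
```

The weights and per-class denominators cancel (Σ_c (n_c/N)·(TP_c/n_c) = Σ_c TP_c / N), so weighted recall and accuracy always coincide; weighted precision and F1 can still differ slightly, as seen above.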
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
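The learning-rate schedule above (linear decay with a 0.1 warmup ratio) can be sketched in plain Python. The total step count (~35,350, i.e. 2 epochs at the ~17,675 steps/epoch implied by the results table below) is an estimate, not a value stated in the card:

```python
def linear_warmup_lr(step, total_steps=35350, base_lr=1e-5, warmup_ratio=0.1):
    """LR for a linear schedule with warmup, mirroring the card's settings.

    total_steps is estimated from the results table (assumption);
    base_lr and warmup_ratio come from the hyperparameter list.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up linearly from 0
    # decay linearly from base_lr back down to 0 over the remaining steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

assert linear_warmup_lr(0) == 0.0                      # start of warmup
assert abs(linear_warmup_lr(3535) - 1e-5) < 1e-12      # peak at end of warmup
assert linear_warmup_lr(35350) == 0.0                  # fully decayed
```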
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 0.5687 | 0.0400 | 707 | 0.3330 | 0.8668 | 0.8668 | 0.8672 | 0.8668 |
| 0.3066 | 0.0801 | 1414 | 0.2736 | 0.8852 | 0.8852 | 0.8855 | 0.8852 |
| 0.2733 | 0.1201 | 2121 | 0.2515 | 0.8950 | 0.8950 | 0.8950 | 0.8950 |
| 0.2600 | 0.1601 | 2828 | 0.2419 | 0.8993 | 0.8993 | 0.8994 | 0.8993 |
| 0.2540 | 0.2002 | 3535 | 0.2340 | 0.9035 | 0.9035 | 0.9035 | 0.9035 |
| 0.2430 | 0.2402 | 4242 | 0.2333 | 0.9023 | 0.9023 | 0.9024 | 0.9023 |
| 0.2412 | 0.2802 | 4949 | 0.2306 | 0.9010 | 0.9010 | 0.9015 | 0.9010 |
| 0.2405 | 0.3203 | 5656 | 0.2281 | 0.9048 | 0.9048 | 0.9049 | 0.9048 |
| 0.2330 | 0.3603 | 6363 | 0.2253 | 0.9071 | 0.9071 | 0.9073 | 0.9071 |
| 0.2357 | 0.4003 | 7070 | 0.2250 | 0.9073 | 0.9073 | 0.9079 | 0.9073 |
| 0.2321 | 0.4403 | 7777 | 0.2245 | 0.9051 | 0.9051 | 0.9051 | 0.9051 |
| 0.2335 | 0.4804 | 8484 | 0.2325 | 0.9029 | 0.9028 | 0.9045 | 0.9029 |
| 0.2341 | 0.5204 | 9191 | 0.2229 | 0.9082 | 0.9082 | 0.9083 | 0.9082 |
| 0.2295 | 0.5604 | 9898 | 0.2187 | 0.9087 | 0.9087 | 0.9088 | 0.9087 |
| 0.2281 | 0.6005 | 10605 | 0.2228 | 0.9055 | 0.9055 | 0.9058 | 0.9055 |
| 0.2293 | 0.6405 | 11312 | 0.2188 | 0.9087 | 0.9087 | 0.9087 | 0.9087 |
| 0.2286 | 0.6805 | 12019 | 0.2188 | 0.9087 | 0.9087 | 0.9087 | 0.9087 |
| 0.2262 | 0.7206 | 12726 | 0.2183 | 0.9105 | 0.9105 | 0.9105 | 0.9105 |
| 0.2255 | 0.7606 | 13433 | 0.2176 | 0.9082 | 0.9082 | 0.9084 | 0.9082 |
| 0.2204 | 0.8006 | 14140 | 0.2189 | 0.9110 | 0.9110 | 0.9111 | 0.9110 |
| 0.2256 | 0.8407 | 14847 | 0.2176 | 0.9083 | 0.9083 | 0.9086 | 0.9083 |
| 0.2220 | 0.8807 | 15554 | 0.2145 | 0.9116 | 0.9116 | 0.9116 | 0.9116 |
| 0.2198 | 0.9207 | 16261 | 0.2155 | 0.9113 | 0.9113 | 0.9116 | 0.9113 |
| 0.2223 | 0.9608 | 16968 | 0.2177 | 0.9075 | 0.9075 | 0.9079 | 0.9075 |
| 0.2223 | 1.0008 | 17675 | 0.2147 | 0.9112 | 0.9112 | 0.9112 | 0.9112 |
| 0.2064 | 1.0408 | 18382 | 0.2157 | 0.9105 | 0.9105 | 0.9105 | 0.9105 |
| 0.2053 | 1.0809 | 19089 | 0.2153 | 0.9102 | 0.9102 | 0.9102 | 0.9102 |
| 0.2071 | 1.1209 | 19796 | 0.2133 | 0.9113 | 0.9113 | 0.9113 | 0.9113 |
| 0.2035 | 1.1609 | 20503 | 0.2165 | 0.9130 | 0.9130 | 0.9130 | 0.9130 |
| 0.2033 | 1.2010 | 21210 | 0.2153 | 0.9119 | 0.9119 | 0.9119 | 0.9119 |
| 0.2071 | 1.2410 | 21917 | 0.2144 | 0.9124 | 0.9124 | 0.9124 | 0.9124 |
| 0.2025 | 1.2810 | 22624 | 0.2132 | 0.9130 | 0.9130 | 0.9131 | 0.9130 |
| 0.2056 | 1.3210 | 23331 | 0.2158 | 0.9111 | 0.9111 | 0.9113 | 0.9111 |
| 0.2058 | 1.3611 | 24038 | 0.2127 | 0.9117 | 0.9117 | 0.9117 | 0.9117 |
| 0.2026 | 1.4011 | 24745 | 0.2150 | 0.9124 | 0.9124 | 0.9124 | 0.9124 |
| 0.2053 | 1.4411 | 25452 | 0.2155 | 0.9123 | 0.9123 | 0.9125 | 0.9123 |
| 0.2006 | 1.4812 | 26159 | 0.2143 | 0.9135 | 0.9135 | 0.9136 | 0.9135 |
| 0.2054 | 1.5212 | 26866 | 0.2123 | 0.9142 | 0.9142 | 0.9142 | 0.9142 |
| 0.2017 | 1.5612 | 27573 | 0.2154 | 0.9123 | 0.9123 | 0.9127 | 0.9123 |
| 0.2027 | 1.6013 | 28280 | 0.2117 | 0.9137 | 0.9137 | 0.9137 | 0.9137 |
| 0.2029 | 1.6413 | 28987 | 0.2136 | 0.9132 | 0.9132 | 0.9133 | 0.9132 |
| 0.2025 | 1.6813 | 29694 | 0.2136 | 0.9123 | 0.9123 | 0.9124 | 0.9123 |
| 0.2037 | 1.7214 | 30401 | 0.2121 | 0.9125 | 0.9125 | 0.9125 | 0.9125 |
| 0.2015 | 1.7614 | 31108 | 0.2123 | 0.9131 | 0.9131 | 0.9131 | 0.9131 |
| 0.2010 | 1.8014 | 31815 | 0.2127 | 0.9127 | 0.9127 | 0.9127 | 0.9127 |
| 0.2017 | 1.8415 | 32522 | 0.2109 | 0.9130 | 0.9130 | 0.9130 | 0.9130 |
| 0.2003 | 1.8815 | 33229 | 0.2114 | 0.9132 | 0.9132 | 0.9132 | 0.9132 |
| 0.2012 | 1.9215 | 33936 | 0.2123 | 0.9131 | 0.9131 | 0.9132 | 0.9131 |
| 0.1990 | 1.9616 | 34643 | 0.2120 | 0.9126 | 0.9126 | 0.9127 | 0.9126 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
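With these library versions, the checkpoint can be loaded for inference via the standard Transformers pipeline API. A minimal usage sketch (the repo id comes from this card; the label names returned depend on the checkpoint's config, and weights are downloaded on first use):

```python
MODEL_ID = "DoDucAnh/bertweet-base-sentiment-tuned"

def predict(texts):
    """Classify tweets with the fine-tuned checkpoint.

    Imports are kept local so this sketch stays importable even when
    transformers is not installed; the pipeline downloads the model
    weights from the Hub on the first call.
    """
    from transformers import pipeline
    classifier = pipeline("text-classification", model=MODEL_ID)
    return classifier(texts)

if __name__ == "__main__":
    print(predict(["I love this!", "This is terrible..."]))
```

Note that BERTweet-based models expect tweet-style input; raw text should match the preprocessing used at training time.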
## Model tree for DoDucAnh/bertweet-base-sentiment-tuned

Base model: vinai/bertweet-base