# finetuning-sentiment-model-tweet-finalVersion

This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment-latest on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.8178
- Precision Negative: 0.8125
- Recall Negative: 0.7222
- F1 Negative: 0.7647
- Precision Neutral: 0.8140
- Recall Neutral: 0.8750
- F1 Neutral: 0.8434
- Precision Positive: 0.8889
- Recall Positive: 0.8571
- F1 Positive: 0.8727
- Accuracy: 0.8372
- Confusion Matrix: [[26, 9, 1], [5, 70, 5], [1, 7, 48]] (rows: true negative/neutral/positive; columns: predicted)
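The per-class scores and accuracy above can be recomputed directly from the confusion matrix. A minimal sketch in plain Python, assuming rows are true labels and columns are predictions, both ordered negative/neutral/positive (an ordering consistent with the reported numbers):

```python
# Recompute precision, recall, F1, and accuracy from the confusion
# matrix reported above (rows = true labels, columns = predictions).
cm = [[26, 9, 1],   # true negative
      [5, 70, 5],   # true neutral
      [1, 7, 48]]   # true positive

labels = ["negative", "neutral", "positive"]

for i, name in enumerate(labels):
    tp = cm[i][i]
    precision = tp / sum(cm[r][i] for r in range(3))  # column sum
    recall = tp / sum(cm[i])                          # row sum
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")

accuracy = sum(cm[i][i] for i in range(3)) / sum(sum(row) for row in cm)
print(f"accuracy={accuracy:.4f}")  # 144 correct out of 172 -> 0.8372
```

Running this reproduces the evaluation numbers in the list above to four decimal places, which confirms the row/column orientation of the matrix.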
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
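With 22 optimizer steps per epoch (see the results table) and 6 epochs, training runs for 132 steps, so a warmup ratio of 0.1 corresponds to roughly 13 warmup steps. A plain-Python sketch of the linear warmup-then-decay schedule these settings imply (the exact warmup-step rounding is an assumption, not stated in the card):

```python
# Sketch of the LR schedule implied by the hyperparameters above:
# linear warmup over the first ~10% of steps, then linear decay to 0.
BASE_LR = 5e-05
STEPS_PER_EPOCH = 22                    # from the results table
TOTAL_STEPS = 6 * STEPS_PER_EPOCH       # 132
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)   # ~13 (rounding is an assumption)

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer updates."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(lr_at(0), lr_at(WARMUP_STEPS), lr_at(TOTAL_STEPS))
```

The schedule peaks at the configured 5e-05 at the end of warmup and reaches zero at the final step, matching `lr_scheduler_type: linear` with `warmup_ratio: 0.1`.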
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Negative | Recall Negative | F1 Negative | Precision Neutral | Recall Neutral | F1 Neutral | Precision Positive | Recall Positive | F1 Positive | Accuracy | Confusion Matrix |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.496 | 1.0 | 22 | 0.7011 | 0.875 | 0.5833 | 0.7 | 0.7792 | 0.75 | 0.7643 | 0.7183 | 0.9107 | 0.8031 | 0.7674 | [[21, 12, 3], [3, 60, 17], [0, 5, 51]] |
| 0.3789 | 2.0 | 44 | 0.6227 | 0.725 | 0.8056 | 0.7632 | 0.7582 | 0.8625 | 0.8070 | 0.9756 | 0.7143 | 0.8247 | 0.8023 | [[29, 7, 0], [10, 69, 1], [1, 15, 40]] |
| 0.1735 | 3.0 | 66 | 0.6720 | 0.7879 | 0.7222 | 0.7536 | 0.8 | 0.85 | 0.8242 | 0.8704 | 0.8393 | 0.8545 | 0.8198 | [[26, 9, 1], [6, 68, 6], [1, 8, 47]] |
| 0.1261 | 4.0 | 88 | 0.7001 | 0.8387 | 0.7222 | 0.7761 | 0.8046 | 0.875 | 0.8383 | 0.8704 | 0.8393 | 0.8545 | 0.8314 | [[26, 9, 1], [4, 70, 6], [1, 8, 47]] |
| 0.0555 | 5.0 | 110 | 0.7969 | 0.8387 | 0.7222 | 0.7761 | 0.8140 | 0.875 | 0.8434 | 0.8727 | 0.8571 | 0.8649 | 0.8372 | [[26, 9, 1], [4, 70, 6], [1, 7, 48]] |
| 0.035 | 6.0 | 132 | 0.8178 | 0.8125 | 0.7222 | 0.7647 | 0.8140 | 0.875 | 0.8434 | 0.8889 | 0.8571 | 0.8727 | 0.8372 | [[26, 9, 1], [5, 70, 5], [1, 7, 48]] |
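The table reports only per-class scores and overall accuracy; macro- and support-weighted averages can be derived from the final-epoch numbers. A short sketch, using the per-class supports 36/80/56 taken from the final confusion matrix's row sums:

```python
# Derive macro and support-weighted F1 from the final-epoch per-class
# F1 scores and the confusion-matrix row sums (class supports).
f1 = {"negative": 0.7647, "neutral": 0.8434, "positive": 0.8727}
support = {"negative": 36, "neutral": 80, "positive": 56}  # row sums

macro_f1 = sum(f1.values()) / len(f1)
weighted_f1 = sum(f1[c] * support[c] for c in f1) / sum(support.values())

print(f"macro F1    = {macro_f1:.4f}")     # ~0.8269
print(f"weighted F1 = {weighted_f1:.4f}")  # ~0.8365
```

Both averages sit close to the reported 0.8372 accuracy, which is expected given how balanced the per-class scores are.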
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3