luluw committed · Commit b30b8a6 · verified · 1 Parent(s): 5c8fa45

End of training

README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: transformers
+ language:
+ - en
+ base_model: cardiffnlp/twitter-roberta-base-sentiment
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: twitter-roberta-base-sentiment-finetuned-sentiment
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # twitter-roberta-base-sentiment-finetuned-sentiment
+
+ This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the Twitter Sentiment Datasets dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4905
+ - Accuracy: 0.8123
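+
+ The checkpoint can be loaded like any other `transformers` sequence-classification model. The snippet below is only an illustrative sketch: the model id is a placeholder, and the negative/neutral/positive label order is assumed from the base model rather than confirmed by this card.
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ import torch
+
+ model_id = "luluw/twitter-roberta-base-sentiment-finetuned-sentiment"  # placeholder; point to the actual repo or a local path
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ # Label order assumed from the base model cardiffnlp/twitter-roberta-base-sentiment
+ labels = ["negative", "neutral", "positive"]
+
+ inputs = tokenizer("I love this new phone!", return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ print(labels[logits.argmax(dim=-1).item()])
+ ```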
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a corresponding `TrainingArguments` sketch follows the list):
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 5
+ - mixed_precision_training: Native AMP
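+
+ These values map onto a standard `Trainer` setup roughly as sketched below. This is an illustrative reconstruction, not the original training script: `train_dataset` and `eval_dataset` are placeholders, preprocessing is omitted, and the output directory name is assumed.
+
+ ```python
+ import numpy as np
+ from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
+                           Trainer, TrainingArguments)
+
+ base = "cardiffnlp/twitter-roberta-base-sentiment"
+ tokenizer = AutoTokenizer.from_pretrained(base)
+ model = AutoModelForSequenceClassification.from_pretrained(base)
+
+ training_args = TrainingArguments(
+     output_dir="twitter-roberta-base-sentiment-finetuned-sentiment",
+     learning_rate=2e-5,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=32,
+     seed=42,
+     optim="adamw_torch",   # AdamW with default betas=(0.9, 0.999), epsilon=1e-08
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     num_train_epochs=5,
+     fp16=True,             # "Native AMP" mixed precision
+     eval_strategy="epoch",
+ )
+
+ def compute_metrics(eval_pred):
+     logits, labels = eval_pred
+     return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}
+
+ # train_dataset / eval_dataset: tokenized splits, assumed to be prepared elsewhere
+ trainer = Trainer(
+     model=model,
+     args=training_args,
+     train_dataset=train_dataset,
+     eval_dataset=eval_dataset,
+     tokenizer=tokenizer,
+     compute_metrics=compute_metrics,
+ )
+ trainer.train()
+ ```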
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.5275 | 1.0 | 1250 | 0.4646 | 0.8098 |
+ | 0.4013 | 2.0 | 2500 | 0.4905 | 0.8123 |
+ | 0.2941 | 3.0 | 3750 | 0.5455 | 0.8104 |
+ | 0.2136 | 4.0 | 5000 | 0.6100 | 0.8096 |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.3
+ - PyTorch 2.5.1+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7bf8d2d032881bd8570bcd878f7d4e2b7dde4f77131270e2ca9c6318cdeed64b
+ oid sha256:5e4c0f995c36269e4adac6d701ca003aa39a2fe5ff0d432903f0f2d3065b442e
  size 498615900
runs/Nov22_04-49-03_e256319355c4/events.out.tfevents.1732251031.e256319355c4.204.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dba481aeb24bf900ef68efc88bdcd7ba27e470d645b420316fc7408ca9fa198f
- size 7567
+ oid sha256:55743fe8aad5830bdb9e417eea11935e301def14781ba8c5031e84c71d73a0e3
+ size 7921
runs/Nov22_04-49-03_e256319355c4/events.out.tfevents.1732253644.e256319355c4.204.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c968a52b0a2e14cdd0069eded125bf4544156e94cdf68bf6c67f2330029342c
+ size 411