gbemilekeonilude committed on
Commit 308fb66 · verified · Parent(s): 56230b8

Model save

Files changed (1): README.md (+81, -0)

README.md ADDED
---
license: apache-2.0
base_model: EleutherAI/pythia-410m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: oop_and_text_pythia_410m
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhenwu/code-text-pretraining/runs/5ksxkt02)

# oop_and_text_pythia_410m

This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6227
- Accuracy: 0.2170
- Num Input Tokens Seen: 5734400

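For a quick smoke test, the checkpoint can be loaded with the standard `transformers` causal-LM classes. The sketch below is illustrative only: the Hub repository id is assumed from the model name and committer and may differ, and the prompt is just an example.

```python
# Minimal inference sketch; the repo id is assumed from the model name
# and committer, adjust it to the actual Hub path if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gbemilekeonilude/oop_and_text_pythia_410m"  # assumed Hub path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Example prompt; the model is a Pythia-410m fine-tune, so plain
# left-to-right generation applies.
prompt = "class Stack:\n    def __init__(self):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
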
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0

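The settings above map roughly onto `transformers.TrainingArguments` as sketched below. This is an illustration, not the original launch script: the output directory and the 50-step evaluation/logging cadence (inferred from the results table) are assumptions, and the dataset/model wiring is omitted.

```python
# Hedged sketch of the hyperparameters above using transformers.TrainingArguments.
# Per-device batch size 4 on 2 GPUs yields the effective train/eval batch size of 8.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="oop_and_text_pythia_410m",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",  # evaluation every 50 steps, inferred from the results table
    eval_steps=50,
    logging_steps=50,
)
```
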
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|
| No log        | 0      | 0    | 3.0231          | 0.2075   | 0                 |
| 1.8656        | 0.2092 | 50   | 2.0404          | 0.2358   | 409600            |
| 1.8788        | 0.4184 | 100  | 1.8193          | 0.2170   | 819200            |
| 1.7635        | 0.6276 | 150  | 1.6325          | 0.1887   | 1228800           |
| 1.6773        | 0.8368 | 200  | 1.6925          | 0.1887   | 1638400           |
| 1.6309        | 1.0460 | 250  | 1.6849          | 0.1934   | 2048000           |
| 1.5824        | 1.2552 | 300  | 1.8487          | 0.1840   | 2457600           |
| 1.8204        | 1.4644 | 350  | 1.6930          | 0.1887   | 2867200           |
| 1.6639        | 1.6736 | 400  | 1.6967          | 0.2123   | 3276800           |
| 1.5446        | 1.8828 | 450  | 1.6562          | 0.2217   | 3686400           |
| 1.569         | 2.0921 | 500  | 1.6151          | 0.2123   | 4096000           |
| 1.5797        | 2.3013 | 550  | 1.6244          | 0.2311   | 4505600           |
| 1.5543        | 2.5105 | 600  | 1.6461          | 0.2028   | 4915200           |
| 1.5691        | 2.7197 | 650  | 1.6240          | 0.2075   | 5324800           |
| 1.5852        | 2.9289 | 700  | 1.6227          | 0.2170   | 5734400           |

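Assuming the validation loss above is the mean token-level cross-entropy in nats, the final value of 1.6227 corresponds to a perplexity of roughly exp(1.6227) ≈ 5.07.
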
### Framework versions

- Transformers 4.43.2
- PyTorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1