yalhessi committed · Commit be351c4 · verified · 1 Parent(s): 23ce84b

End of training

README.md ADDED
@@ -0,0 +1,122 @@
+ ---
+ library_name: peft
+ license: other
+ base_model: deepseek-ai/deepseek-coder-1.3b-base
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: lemexp-task1-template_small-deepseek-coder-1.3b-base-ddp-8lr-1bs
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # lemexp-task1-template_small-deepseek-coder-1.3b-base-ddp-8lr-1bs
+
+ This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1927
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0008
+ - train_batch_size: 1
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - total_train_batch_size: 8
+ - total_eval_batch_size: 16
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 12
+ - mixed_precision_training: Native AMP
+
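The total batch sizes listed above follow directly from the DDP setup: each of the 8 devices processes its own per-device batch. A minimal sketch of that arithmetic (the function name is illustrative, not from the training code):

```python
# Effective (global) batch size under data-parallel training:
# each device contributes one per-device batch per step.
def effective_batch_size(per_device_batch: int, num_devices: int,
                         grad_accum_steps: int = 1) -> int:
    return per_device_batch * num_devices * grad_accum_steps

train_total = effective_batch_size(1, 8)  # -> 8, matches total_train_batch_size
eval_total = effective_batch_size(2, 8)   # -> 16, matches total_eval_batch_size
print(train_total, eval_total)
```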
+ ### Training results
+
+ | Training Loss | Epoch   | Step  | Validation Loss |
+ |:-------------:|:-------:|:-----:|:---------------:|
+ | 0.3957        | 0.2001  | 1258  | 0.3826          |
+ | 0.3477        | 0.4002  | 2516  | 0.3397          |
+ | 0.3363        | 0.6003  | 3774  | 0.3289          |
+ | 0.3209        | 0.8004  | 5032  | 0.3253          |
+ | 0.3156        | 1.0005  | 6290  | 0.3069          |
+ | 0.3004        | 1.2006  | 7548  | 0.3068          |
+ | 0.3008        | 1.4007  | 8806  | 0.2984          |
+ | 0.2906        | 1.6008  | 10064 | 0.2998          |
+ | 0.2945        | 1.8009  | 11322 | 0.2932          |
+ | 0.2894        | 2.0010  | 12580 | 0.2823          |
+ | 0.2752        | 2.2010  | 13838 | 0.2808          |
+ | 0.2718        | 2.4011  | 15096 | 0.2829          |
+ | 0.2731        | 2.6012  | 16354 | 0.2806          |
+ | 0.2692        | 2.8013  | 17612 | 0.2707          |
+ | 0.2644        | 3.0014  | 18870 | 0.2728          |
+ | 0.2543        | 3.2015  | 20128 | 0.2660          |
+ | 0.2575        | 3.4016  | 21386 | 0.2628          |
+ | 0.2563        | 3.6017  | 22644 | 0.2645          |
+ | 0.2522        | 3.8018  | 23902 | 0.2550          |
+ | 0.2519        | 4.0019  | 25160 | 0.2550          |
+ | 0.2397        | 4.2020  | 26418 | 0.2544          |
+ | 0.2419        | 4.4021  | 27676 | 0.2483          |
+ | 0.2358        | 4.6022  | 28934 | 0.2477          |
+ | 0.2349        | 4.8023  | 30192 | 0.2466          |
+ | 0.234         | 5.0024  | 31450 | 0.2442          |
+ | 0.2212        | 5.2025  | 32708 | 0.2443          |
+ | 0.2221        | 5.4026  | 33966 | 0.2420          |
+ | 0.222         | 5.6027  | 35224 | 0.2322          |
+ | 0.2198        | 5.8028  | 36482 | 0.2319          |
+ | 0.2193        | 6.0029  | 37740 | 0.2315          |
+ | 0.2051        | 6.2030  | 38998 | 0.2245          |
+ | 0.2071        | 6.4031  | 40256 | 0.2249          |
+ | 0.2039        | 6.6031  | 41514 | 0.2309          |
+ | 0.2059        | 6.8032  | 42772 | 0.2184          |
+ | 0.2044        | 7.0033  | 44030 | 0.2175          |
+ | 0.1878        | 7.2034  | 45288 | 0.2172          |
+ | 0.1903        | 7.4035  | 46546 | 0.2123          |
+ | 0.1924        | 7.6036  | 47804 | 0.2105          |
+ | 0.1886        | 7.8037  | 49062 | 0.2087          |
+ | 0.1876        | 8.0038  | 50320 | 0.2063          |
+ | 0.1726        | 8.2039  | 51578 | 0.2109          |
+ | 0.1756        | 8.4040  | 52836 | 0.2097          |
+ | 0.1764        | 8.6041  | 54094 | 0.2045          |
+ | 0.1737        | 8.8042  | 55352 | 0.1993          |
+ | 0.1702        | 9.0043  | 56610 | 0.2031          |
+ | 0.1561        | 9.2044  | 57868 | 0.1991          |
+ | 0.158         | 9.4045  | 59126 | 0.1977          |
+ | 0.1568        | 9.6046  | 60384 | 0.1983          |
+ | 0.1583        | 9.8047  | 61642 | 0.1965          |
+ | 0.1591        | 10.0048 | 62900 | 0.1940          |
+ | 0.1419        | 10.2049 | 64158 | 0.1956          |
+ | 0.1434        | 10.4050 | 65416 | 0.1924          |
+ | 0.1411        | 10.6051 | 66674 | 0.1940          |
+ | 0.1418        | 10.8052 | 67932 | 0.1929          |
+ | 0.1393        | 11.0052 | 69190 | 0.1922          |
+ | 0.1279        | 11.2053 | 70448 | 0.1946          |
+ | 0.1287        | 11.4054 | 71706 | 0.1953          |
+ | 0.1274        | 11.6055 | 72964 | 0.1948          |
+ | 0.1259        | 11.8056 | 74222 | 0.1927          |
+
+
+ ### Framework versions
+
+ - PEFT 0.14.0
+ - Transformers 4.47.0
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
adapter_config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "deepseek-ai/deepseek-coder-1.3b-base",
+   "bias": "none",
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "q_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd13a4f14b517380d1c38dd4bced5100c97f039cd08c32ebe65b2a304f8f65fc
+ size 6304096
loss_plot.png ADDED
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1694631fa62abcefb4013c9a623a38b00e47cb47b2f18391f0e0be2d74ad7ea7
+ size 5496