Tanabodee Limpaitoon committed on
Commit 0c7d498 · verified · 1 Parent(s): 62baf72

nvl-og/finetuned-ai

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
-license: llama3.1
-base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
+license: llama3.2
+base_model: meta-llama/Llama-3.2-3b-instruct
 tags:
 - generated_from_trainer
 model-index:
@@ -14,14 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # results
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the None dataset.
+This model is a fine-tuned version of [meta-llama/Llama-3.2-3b-instruct](https://huggingface.co/meta-llama/Llama-3.2-3b-instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.0337
-- eval_runtime: 12.3558
-- eval_samples_per_second: 17.239
-- eval_steps_per_second: 2.185
-- epoch: 4.0
-- step: 214
+- Loss: 0.2300
 
 ## Model description
 
@@ -44,12 +39,23 @@ The following hyperparameters were used during training:
 - train_batch_size: 2
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 4
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - num_epochs: 5
 
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| No log        | 0.9720 | 13   | 0.4116          |
+| 0.819         | 1.9439 | 26   | 0.3048          |
+| 0.3283        | 2.9907 | 40   | 0.2464          |
+| 0.3283        | 3.9626 | 53   | 0.2307          |
+| 0.2244        | 4.8598 | 65   | 0.2300          |
+
 ### Framework versions
 
 - Transformers 4.45.1
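The hyperparameter change above keeps the per-device train batch size at 2 but raises gradient accumulation from 2 to 8, which is why the reported total train batch size moves from 4 to 16: the effective batch size per optimizer step is the product of the two (times the number of devices). A minimal sketch of that arithmetic; the helper name is hypothetical, not part of the trainer:

```python
def total_train_batch_size(per_device_batch: int,
                           grad_accum_steps: int,
                           num_devices: int = 1) -> int:
    """Effective batch size seen by the optimizer per update step."""
    return per_device_batch * grad_accum_steps * num_devices

# Old config: batch 2, accumulation 2 -> total 4
assert total_train_batch_size(2, 2) == 4
# New config: batch 2, accumulation 8 -> total 16
assert total_train_batch_size(2, 8) == 16
```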
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5b7454a3729bf9f427a11944fbf17a5aba7f77e8014341cabc05d80e4087dd37
+oid sha256:bfa6b14ee09d776a8eb13cb300a067e43dcb96b7fc1fe6803763f22d1d62e019
 size 4965799096
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:591690a4081655ea35af3343703376f227d6a378221aab73b7ee7477f68908dd
+oid sha256:406a9b2fa21aeaa3215a662771e3a352512546d1ea722466ebf5e47c93919195
 size 1459729952
runs/Sep27_13-13-09_144f728d997d/events.out.tfevents.1727467990.144f728d997d.6579.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f0c420b46740266421d7debe149bd3d78fb9e8ebf03f8ea7341d1a3a4fbf1d9f
-size 5454
+oid sha256:2ae576dc8ad115556611af860dc59516bb0fccb16bf8785f9dda9b25cfe58bd1
+size 7487
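The weight and event files above are tracked with Git LFS, so the repo stores small pointer files (`version`, `oid`, `size` key-value lines) rather than the blobs themselves; the diffs show only the new content hash and byte size. An illustrative sketch of reading that pointer format (this is not git-lfs's own parser):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file: one 'key value' pair per line."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2ae576dc8ad115556611af860dc59516bb0fccb16bf8785f9dda9b25cfe58bd1
size 7487"""

info = parse_lfs_pointer(pointer)
assert info["size"] == "7487"
assert info["oid"].startswith("sha256:")
```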