SihyunPark committed on
Commit 03d9da0 · verified · 1 Parent(s): a21eaa9

Delete README.md

Files changed (1)
  1. README.md +0 -61
README.md DELETED
@@ -1,61 +0,0 @@
- ---
- library_name: transformers
- license: other
- base_model: wisenut-nlp-team/wisenut-llama-3.1-8B-0.8-Instruct
- tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: '0.8'
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # 0.8
-
- This model is a fine-tuned version of [wisenut-nlp-team/wisenut-llama-3.1-8B-0.8-Instruct](https://huggingface.co/wisenut-nlp-team/wisenut-llama-3.1-8B-0.8-Instruct) on the data_recipe_v3 dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 4
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 16
- - total_train_batch_size: 512
- - total_eval_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 20
- - num_epochs: 1.0
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.44.1
- - Pytorch 2.2.2
- - Datasets 2.20.0
- - Tokenizers 0.19.1
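
For context, the deleted card's tags indicate the run was produced with llama-factory, which wraps the Hugging Face Trainer, so the listed hyperparameters map onto a standard `TrainingArguments` configuration. Below is a minimal sketch under that assumption; the `output_dir` and the launch command are illustrative, and the 8-way data parallelism comes from the launcher rather than from these arguments.

```python
# Minimal sketch of the deleted card's training configuration
# (assumes transformers 4.44.x; output_dir and launch command are illustrative).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="0.8",                # illustrative; matches the card's model name
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=16,  # 4 per device * 8 GPUs * 16 steps = 512
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_steps=20,                 # lr_scheduler_warmup_steps: 20
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999), epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)

# The card's multi-GPU setup (8 devices) is supplied by the launcher, e.g.:
#   torchrun --nproc_per_node=8 train.py
```

The derived totals are consistent with the card: 4 per device × 8 devices × 16 accumulation steps gives the total_train_batch_size of 512, and 8 per device × 8 devices gives the total_eval_batch_size of 64.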