Update README.md

### DUS (Depth Up-Scaling) and continued pre-training

Similar to the methodology described in the paper, we expanded the model from 32 transformer blocks to 48 blocks and then continued pre-training on a public dataset. Pre-training ran for 3 days on 4 AWS `ml.g5.48xlarge` instances (32 NVIDIA A10G GPUs in total). For the pre-training corpus, we used a sample set from Wikipedia.
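
For reference, a minimal sketch of the block expansion is shown below. This is an illustrative reconstruction, not the exact script we used: it assumes a SOLAR-style split (the first 24 and last 24 of the 32 blocks, duplicated and concatenated into 48) and that the decoder blocks are exposed at `model.model.layers`, which may differ across `transformers` versions.

```python
# Illustrative depth up-scaling sketch (assumed SOLAR-style split; not the exact script used).
import copy

import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.bfloat16)

layers = base.model.layers  # 32 decoder blocks in phi-2 (attribute path may vary by transformers version)
keep = 24                   # assumed split: first 24 + last 24 -> 48 blocks

expanded = torch.nn.ModuleList(
    [copy.deepcopy(block) for block in layers[:keep]]
    + [copy.deepcopy(block) for block in layers[len(layers) - keep:]]
)
base.model.layers = expanded
base.config.num_hidden_layers = len(expanded)  # 48

base.save_pretrained("phi-2-dus-48")  # starting checkpoint for continued pre-training (path is a placeholder)
```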

For distributed training, all model weights were trained (no adapter techniques were applied), and sharded data parallelism was handled with DeepSpeed ZeRO-2. The presets are as follows.

```json
{
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },

  "bf16": {
    "enabled": "auto"
  },

  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },

  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },

  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 2e8,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 2e8,
    "contiguous_gradients": true,
    "cpu_offload": true
  },

  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```

Some hyperparameters are listed below.

```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```
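
The pre-training script itself is not included in this card. As a rough illustration of how the DeepSpeed preset and the hyperparameters above fit together with the Hugging Face `Trainer`, a hedged sketch follows; the config filename `ds_config_zero2.json`, the checkpoint path `phi-2-dus-48`, the Wikipedia split size, and the bf16 choice are assumptions, not values taken from the actual script.

```python
# Illustrative wiring of the DeepSpeed preset and hyperparameters above into the HF Trainer.
# "ds_config_zero2.json" holds the JSON shown earlier; dataset/checkpoint choices are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained("phi-2-dus-48")  # the up-scaled 48-block checkpoint (assumed path)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
tokenizer.pad_token = tokenizer.eos_token

# "Sample set from Wikipedia" -- dataset id and split size here are placeholders.
raw = load_dataset("wikipedia", "20220301.en", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_dataset = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

training_args = TrainingArguments(
    output_dir="cpt-output",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=3e-4,
    lr_scheduler_type="linear",
    group_by_length=False,
    bf16=True,                          # assumed precision choice
    deepspeed="ds_config_zero2.json",   # the "auto" fields in the JSON are filled from these arguments
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice such a script would be started with the `deepspeed` (or `torchrun`) launcher so that ZeRO-2 shards the optimizer states and gradients across the 32 GPUs.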

### Fine-tuning

After pre-training, instruction tuning and alignment tuning were performed sequentially. This took only about 10 hours on an AWS `ml.g5.24xlarge` instance (4 NVIDIA A10G GPUs). Instruction tuning used a sample set of the OpenOrca dataset, and alignment tuning used Intel's orca_dpo_pairs dataset.
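
Both datasets are available on the Hugging Face Hub and can be pulled as sketched below; the sample size shown is a placeholder, since this card does not state how large the samples actually were.

```python
# Loading the fine-tuning datasets from the Hugging Face Hub.
# The sample size below is a placeholder; the actual sample size is not specified in this card.
from datasets import load_dataset

# Instruction-tuning data: a sample of OpenOrca
sft_data = (
    load_dataset("Open-Orca/OpenOrca", split="train")
    .shuffle(seed=42)
    .select(range(10_000))  # placeholder sample size
)

# Alignment-tuning (DPO) data: Intel's orca_dpo_pairs (chosen/rejected response pairs)
dpo_data = load_dataset("Intel/orca_dpo_pairs", split="train")

print(sft_data[0].keys(), dpo_data[0].keys())
```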

All fine-tuning was done with QLoRA, with batch sizes of 3 (instruction tuning) and 1 (alignment tuning), respectively. We used a context length of 1,024. A context length of 2,048 is also possible, but DPO then often runs out of memory on 24 GB GPUs, so we settled on 1,024. Please see below for the relevant code snippets.

```python
from peft import LoraConfig
from transformers import TrainingArguments

batch_size = 3  # 3 for instruction tuning, 1 for alignment tuning

peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "fc1", "fc2"],  # phi-2 attention and MLP projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

training_arguments = TrainingArguments(
    output_dir="logs",
    num_train_epochs=1,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=4,
    optim="paged_adamw_8bit",
    learning_rate=3e-4,
    weight_decay=0.001,
    bf16=True,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="cosine",
    report_to="wandb",
    # ... (remaining arguments omitted)
)
```
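
The snippet above covers the shared LoRA and optimizer setup; the alignment-tuning step itself is not shown in this card. Below is a hedged sketch of how the DPO stage could look with `trl`'s `DPOTrainer`, assuming the pre-1.0 `trl` signature (it has since changed), an instruction-tuned `model`/`tokenizer` loaded in 4-bit, the `dpo_data` loaded earlier, and an assumed beta and prompt template.

```python
# Hedged sketch of the alignment-tuning (DPO) stage; the actual script is not part of this card.
from trl import DPOTrainer

def to_dpo_format(example):
    # Column mapping for Intel/orca_dpo_pairs; the prompt template here is an assumption.
    return {
        "prompt": example["system"] + "\n" + example["question"],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

dpo_dataset = dpo_data.map(to_dpo_format, remove_columns=dpo_data.column_names)

dpo_trainer = DPOTrainer(
    model,                    # instruction-tuned model with the QLoRA adapters
    ref_model=None,           # with a PEFT model, trl derives the frozen reference model internally
    args=training_arguments,  # per_device_train_batch_size=1 for this stage
    beta=0.1,                 # assumed DPO beta
    train_dataset=dpo_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,          # matches the 1,024 context length above
    max_prompt_length=512,    # assumed
)
dpo_trainer.train()
```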

### References

- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)