---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
datasets:
- Hypersniper/unity_api_2022_3
- ibranze/codellama_unity3d_v2
---

# Description

Qwen2.5-Coder-7B-Instruct fine-tuned on a merged dataset of Unity3D Q&A drawn from two datasets:

- [ibranze/codellama_unity3d_v2](https://huggingface.co/datasets/ibranze/codellama_unity3d_v2) (used in full)
- [Hypersniper/unity_api_2022_3](https://huggingface.co/datasets/Hypersniper/unity_api_2022_3) (5% of the rows)

15,062 rows in total, with a 10% validation split.
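
A rough sketch of how the merge could be reproduced with the `datasets` library; the 5% sampling and 10% split are from the description above, but the column handling (and that the two schemas are compatible) is an assumption:

```python
from datasets import load_dataset, concatenate_datasets

# Full ibranze dataset plus roughly 5% of the Hypersniper one
unity_qa  = load_dataset("ibranze/codellama_unity3d_v2", split="train")
unity_api = load_dataset("Hypersniper/unity_api_2022_3", split="train[:5%]")

# Assumes the two datasets share (or have been mapped to) the same columns
merged = concatenate_datasets([unity_qa, unity_api])

# Hold out 10% of the merged rows for validation
split = merged.train_test_split(test_size=0.1, seed=3407)
train_ds, eval_ds = split["train"], split["test"]
```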

Trained with the native chat template (minus tool usage; see this issue: https://github.com/unslothai/unsloth/issues/1053). Based on some superficial testing, it also seems to respond well to the Mistral template.


Consider this a preview as I develop a dataset of my own that I'm pleased with.
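
As a usage sketch, the model should work with the standard `transformers` chat-template flow; the repository id below is a placeholder, not the actual upload name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neph1/Qwen2.5-Coder-7B-Instruct-Unity"  # placeholder id, substitute the real repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "user", "content": "How do I make a Rigidbody jump when the space key is pressed?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```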



# Uploaded model

- **Developed by:** neph1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit

This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Training details

About 1 epoch (the arguments below specify 3 epochs, but the loss log ends at step 120).

- LoRA rank: 128
- LoRA alpha: 256
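
A sketch of the Unsloth LoRA setup implied by the numbers above; the max sequence length and target modules are assumptions (Unsloth's usual defaults), not taken from the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit",
    max_seq_length = 4096,  # assumption, not stated in the card
    load_in_4bit = True,
)

# LoRA adapters with the rank/alpha listed above
model = FastLanguageModel.get_peft_model(
    model,
    r = 128,
    lora_alpha = 256,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],  # typical Unsloth defaults
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
)
```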

Training arguments:

```python
import torch
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir = "outputs",  # not in the original listing; required by most transformers versions
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 64,
    # max_steps = 10,
    num_train_epochs = 3,
    warmup_steps = 5,
    learning_rate = 1e-4,
    fp16 = not torch.cuda.is_bf16_supported(),
    bf16 = torch.cuda.is_bf16_supported(),
    logging_steps = 10,
    optim = "adamw_8bit",
    weight_decay = 0.01,
    lr_scheduler_type = "linear",
    seed = 3407,
    per_device_eval_batch_size = 2,
    eval_strategy = "steps",
    eval_accumulation_steps = 64,
    eval_steps = 10,
    eval_delay = 0,
    save_strategy = "steps",
    save_steps = 25,
    report_to = "none",
)
```
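
For completeness, these arguments would typically be passed to TRL's `SFTTrainer` roughly like this; the text field name and sequence length are assumptions, since the card does not show the trainer setup:

```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_ds,     # the 90% split
    eval_dataset = eval_ds,       # the 10% validation split
    dataset_text_field = "text",  # assumption; depends on how the chat template was applied
    max_seq_length = 4096,        # assumption
    args = training_args,
)
trainer.train()
```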


| Step | Training Loss | Validation Loss |
|-----:|--------------:|----------------:|
| 10   | 2.097300      | 1.165832        |
| 20   | 1.058100      | 1.013441        |
| 30   | 0.898500      | 0.969640        |
| 40   | 0.866600      | 0.943687        |
| 50   | 0.847300      | 0.926879        |
| 60   | 0.838200      | 0.903914        |
| 70   | 0.797600      | 0.888580        |
| 80   | 0.777700      | 0.873389        |
| 90   | 0.793900      | 0.859501        |
| 100  | 0.725500      | 0.846339        |
| 110  | 0.739400      | 0.843786        |
| 120  | 0.675200      | 0.833775        |