---
language:
- en
library_name: transformers
tags:
- chat
pipeline_tag: text-generation
datasets:
- AquaV/c2-sharegpt-advanced-prefills-filtered
- AquaV/c1-sharegpt-advanced-prefills-filtered
- AquaV/rainy-sharegpt-advanced-prefills-filtered
- anthracite-core/Gryphe-Opus-Charcard-Roleplay
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
- NewEden/Claude-Instruct-2.7K
- NewEden/Claude-Instruct-5K
---

### These are GGUF quants; check [here](https://huggingface.co/Delta-Vector/Rei-12B) for the original weights
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/nqMkoIsmScaTFHCFirGsc.png" width="500px" />

This model is designed to replicate the prose quality of the Claude 3 series of models, specifically Sonnet and Opus. It was made with a prototype Magnum V5 datamix.

This model is fine-tuned on top of [Mistral-Nemo-Instruct (ChatML'ified)](https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML).
## Quants

EXL2: https://huggingface.co/Delta-Vector/Rei-12B-EXL2

GGUF: https://huggingface.co/Delta-Vector/Rei-12B-gguf/

## Prompting
A typical input would look like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
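
The ChatML layout above can also be built programmatically. A minimal sketch, assuming ShareGPT-style `role`/`content` message dicts; the helper `to_chatml` is hypothetical and shown for illustration only:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts into the ChatML layout above."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Matches the trailing "<|im_start|>assistant" in the example,
        # cueing the model to write the next assistant turn.
        out += "<|im_start|>assistant\n"
    return out

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
print(to_chatml(messages))
```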

I would highly recommend using Sao10k's Euryale system prompt with the model.

<details><summary>See Sao10k's Euryale System Prompt</summary>

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>
```

</details><br>

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
## model
base_model: NewEden_nemo-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

## qlora COPE
load_in_8bit: false
load_in_4bit: false
strict: false

## data
datasets:
  - path: AquaV/c2-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: AquaV/c1-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: AquaV/rainy-sharegpt-advanced-prefills-filtered
    type: sharegpt
  - path: anthracite-core/Gryphe-Opus-Charcard-Roleplay
    type: sharegpt
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
  - path: NewEden/Claude-Instruct-2.7K
    type: sharegpt
  - path: NewEden/Claude-Instruct-5K
    type: sharegpt
shuffle_merged_datasets: true
dataset_prepared_path: dataset_prepared
val_set_size: 0.02
output_dir: 12b-out-rslora-SE

## LIGGER
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

## CTX settings
sequence_len: 16384
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

## Lora
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
  - embed_tokens
  - lm_head

## WandB
wandb_project: rei
wandb_entity:
wandb_watch:
wandb_name: daring-mango
wandb_log_model:

## evals
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128

## hoe params
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
# optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2.83e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 40
saves_per_epoch: 2
debug:
## for ademamix
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
## for adamw
# deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```
</details><br>
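
One detail worth noting in the config above: `peft_use_rslora: true` switches the LoRA scaling factor from the classic `alpha / r` to `alpha / sqrt(r)` (the rank-stabilized LoRA formulation as implemented in PEFT), which matters at a high rank like 128. A quick sketch of the difference using the values from the config:

```python
import math

# Values taken from the axolotl config above
lora_r, lora_alpha = 128, 16

standard_scale = lora_alpha / lora_r            # classic LoRA scaling
rslora_scale = lora_alpha / math.sqrt(lora_r)   # rank-stabilized LoRA scaling

print(f"standard: {standard_scale:.4f}")  # 0.1250
print(f"rslora:   {rslora_scale:.4f}")    # 1.4142
```

With plain LoRA scaling, alpha 16 at rank 128 would shrink the adapter's contribution more than tenfold; rsLoRA keeps the effective update magnitude roughly stable as rank grows.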

## Training
Training was done for 2 epochs on 4x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs, graciously provided by @intervitens, for the fine-tuning of the model.
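
Combining the hardware with the batch settings in the config gives the effective global batch size; a small sketch, assuming all 4 GPUs participate in data-parallel training:

```python
# Values from the axolotl config and the Training section
micro_batch_size = 1             # per-GPU batch
gradient_accumulation_steps = 4  # steps accumulated before each optimizer update
num_gpus = 4                     # 4x RTX 3090

effective_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # 16 sequences per optimizer step
```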

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

But why?