Commit cae2984 · Parent: 0b0d373 · Upload README.md

README.md ADDED (@@ -0,0 +1,47 @@)
---
base_model:
- nbeerbower/mistral-nemo-bophades-12B
- nbeerbower/mistral-nemo-gutenberg-12B-v3
license: apache-2.0
library_name: transformers
tags:
- merge
- roleplay
- not-for-all-audiences
---

# Magnum-Instruct-DPO-12B

A 50/50 merge similar to the other Magnum-Instruct, but using model variants that had extra DPO/ORPO training applied beforehand. Not sure yet whether it's actually better than just using the original models, but it seemed fine enough during my limited testing and worth the upload for now.

Big thanks to the MistralAI and Anthracite/SillyTilly teams for the original models used, plus nbeerbower for the extra training done as well!

## Settings

Temperature @ 0.7

Min-P @ 0.02

Smoothing Factor @ 0.3

Smoothing Curve @ 1.5

DRY Multiplier (plus standard DRY settings) @ 0.8

Skip Special Tokens @ On

Everything else @ Off

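For reference, the settings above map onto the sampler fields exposed by common local-inference frontends (e.g. text-generation-webui or SillyTavern). A minimal sketch collecting them as a plain dictionary — the key names are assumptions based on typical frontend conventions, not anything specified by this card:

```python
# Recommended sampler settings from this card, gathered into a dict.
# Key names mirror common frontend sampler fields and are assumptions,
# not an official API.
RECOMMENDED_SETTINGS = {
    "temperature": 0.7,
    "min_p": 0.02,
    "smoothing_factor": 0.3,
    "smoothing_curve": 1.5,
    "dry_multiplier": 0.8,  # plus standard DRY settings
    "skip_special_tokens": True,
    # "Everything else @ Off" -> leave remaining samplers at neutral values
    "top_p": 1.0,
    "top_k": 0,
    "repetition_penalty": 1.0,
}
```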
### Prompt Format: Nemo-Mistral

```
[INST] user prompt[/INST] character response</s>[INST] user prompt[/INST]
```

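If you're building prompts by hand rather than through a frontend's instruct template, the format above can be produced with a small helper. A sketch — the function name and signature are mine for illustration, not part of the card:

```python
def format_nemo_mistral(turns):
    """Build a Nemo-Mistral style prompt string.

    `turns` is a list of (user, assistant) pairs; pass assistant=None in
    the final pair to leave the prompt open for the model's next response.
    """
    prompt = ""
    for user, assistant in turns:
        prompt += f"[INST] {user}[/INST]"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

# format_nemo_mistral([("Hi", "Hello!"), ("How are you?", None)])
# -> "[INST] Hi[/INST] Hello!</s>[INST] How are you?[/INST]"
```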
### Models Merged

The following models were included in the merge:

https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B

https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v3
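The card doesn't include the merge configuration itself, but 50/50 two-model merges like this are typically produced with mergekit. A hypothetical slerp config for illustration only — the merge method, `t` value, and dtype are my assumptions, not the author's actual recipe:

```yaml
# Hypothetical mergekit config -- NOT the actual recipe used for this model.
models:
  - model: nbeerbower/mistral-nemo-bophades-12B
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v3
merge_method: slerp
base_model: nbeerbower/mistral-nemo-bophades-12B
parameters:
  t: 0.5  # equal 50/50 weighting of the two models
dtype: bfloat16
```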