DatToad committed on
Commit 6da0fd1 · verified · 1 Parent(s): 346739b

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -23,9 +23,9 @@ The models in this merge are some of my favorites and I found I liked all of the
 
  Model_stock was the method used, it's very straightforward and quite fast, the bottleneck seemed to be my NVMe drive.
 
- All source models use ChatML prompt formatting and it responds very well. For testing purposes I am using a temperature of 1.08, rep pen of 0.03, and DRY with 0.6 (most Qwen models seem to need DRY). All other samplers are neutralized.
+ All source models use ChatML prompt formatting and it responds very well. Consider the following settings (thanks Geechan!): Temp 1.25, MinP 0.02, XTC 0.15/probability 0.5, DRY 0.8. All other samplers neutralized. Chuluun seems to be able to work with higher temperatures than other Qwen models without losing coherency.
 
- My sysprompt is a modified version of Konnect's, but I expect you should be able to use this with your favorite.
+ Konnect has released their [Qwenception](https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception) sysprompts and settings, which work quite well with Chuluun.
 
  ## Merge Details
  ### Merge Method
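
For reference, here is a minimal sketch of how the recommended samplers from the updated README (Temp 1.25, MinP 0.02, XTC threshold 0.15 with probability 0.5, DRY 0.8) might be passed to a local backend. It assumes a llama.cpp-style server listening on port 8080; the endpoint URL, payload field names, and the example prompt are assumptions, so check the documentation of whatever backend (llama.cpp, KoboldCpp, SillyTavern, etc.) you actually use, since they name these samplers slightly differently.

```python
import requests

# Sampler settings suggested in the README diff above (Temp 1.25, MinP 0.02,
# XTC 0.15 / probability 0.5, DRY 0.8). Field names follow the llama.cpp
# server /completion payload and are assumptions here; other backends may
# expect different names.
SAMPLERS = {
    "temperature": 1.25,
    "min_p": 0.02,
    "xtc_threshold": 0.15,
    "xtc_probability": 0.5,
    "dry_multiplier": 0.8,
}


def generate(prompt: str, server_url: str = "http://127.0.0.1:8080/completion") -> str:
    """Send a prompt with the recommended sampler settings and return the text."""
    payload = {"prompt": prompt, "n_predict": 256, **SAMPLERS}
    response = requests.post(server_url, json=payload, timeout=300)
    response.raise_for_status()
    return response.json().get("content", "")


if __name__ == "__main__":
    # All source models use ChatML, so wrap the turn in ChatML tags.
    chatml_prompt = (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        "<|im_start|>user\nSay hello.<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    print(generate(chatml_prompt))
```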