Tags: parameters guide, samplers guide, model generation, role play settings, quant selection, arm quants, iq quants vs q quants, optimal model setting, gibberish fixes, coherence, instructing following, quality generation, chat settings, quality settings, llamacpp server, llamacpp, lmstudio, sillytavern, koboldcpp, backyard, ollama, model generation steering, steering, model generation fixes, text generation webui, ggufs, exl2, full precision, quants, imatrix, neo imatrix
Update README.md
README.md CHANGED
@@ -39,7 +39,7 @@ These settings can also fix a number of model issues (any model) such as:

Likewise ALL the settings below can also improve model generation and/or the general overall "smoothness" / "quality" of model operation.

-Even if you are not using my models, you may find this document useful for any model available online.
+Even if you are not using my models, you may find this document useful for any model (any quant / full source) available online.

If you are currently using model(s) that are difficult to "wrangle", then apply "Class 3" or "Class 4" settings to them.
@@ -47,6 +47,18 @@ This document will be updated over time too.

Please use the "community tab" for suggestions / edits / improvements.

+IMPORTANT:
+
+Every parameter, sampler and advanced sampler here affects per-token generation and overall generation quality.
+
+This effect is cumulative, especially with long output generation and/or multi-turn use (chat, role play, COT).
+
+Likewise, because of how modern AIs/LLMs operate, the quality of previously generated tokens affects the next tokens generated too.
+
+You will get higher quality operation overall - stronger prose, better answers, and a higher quality adventure.
+
------------------------------------------------------------------------------------------------------------------------------------------------------------
PARAMETERS AND SAMPLERS
------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -162,6 +174,8 @@ Keep in mind the biggest parameter / random "unknown" is your prompt.

A word change, rephrasing, or punctuation change - even a comma or semi-colon - can drastically alter the output, even at minimum temp settings. CAPS affect generation too.

+Likewise the size and complexity of your prompt impact generation too; especially clarity and direction.
+
<B>temp / temperature</B>

temperature (default: 0.8)
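To make the effect of "temp" concrete, here is a toy Python sketch of temperature scaling (illustrative only - not llama.cpp's actual code): logits are divided by temp before the softmax, so temp below 1 sharpens the distribution toward the top tokens and temp above 1 flattens it.

```python
# Toy sketch of temperature scaling (illustrative only): logits are
# divided by temp before softmax, so temp < 1 sharpens the distribution
# and temp > 1 flattens it; temp = 0 degenerates to greedy decoding.
import math

def softmax_with_temp(logits, temp):
    if temp == 0:  # greedy: all probability on the single top token
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temp for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.2, 0.8, 1.5):
    print(t, [round(p, 3) for p in softmax_with_temp(logits, t)])
```

At temp 0.2 nearly all probability mass lands on the top token; at 1.5 the tail tokens become live choices - which is exactly why high temp reads as "creative" and low temp as "stable".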
@@ -180,6 +194,8 @@ top-p sampling (default: 0.9, 1.0 = disabled)

If not set to 1, select tokens with probabilities adding up to less than this number. Higher value = higher range of possible random results.

+Dropping this can simplify word choices, but it works in conjunction with "top-k".
+
I use a default of: .95 ;
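For illustration, here is a toy sketch of the top-p "nucleus" cut described above (my own minimal example, not any engine's implementation):

```python
# Toy sketch of top-p (nucleus) filtering: keep the smallest set of
# top-ranked tokens whose probabilities sum to at least top_p, then
# renormalize. Engines differ in ordering and tie-breaking details.
def top_p_filter(probs, top_p=0.95):
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= top_p:  # nucleus reached; everything else is dropped
            break
    scale = sum(probs[i] for i in kept)
    return {i: probs[i] / scale for i in kept}

# With top_p=0.9 the weakest token (p=0.05) never makes the nucleus:
print(top_p_filter([0.5, 0.3, 0.15, 0.05], top_p=0.9))
```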
<B>min-p</B>
@@ -190,6 +206,8 @@ Tokens with probability smaller than (min_p) * (probability of the most likely token) are filtered out.

I use default: .05 ;

+Careful adjustment of this parameter can result in more "wordy" or "less wordy" generation, but it works in conjunction with "top-k".
+
<B>top-k</B>

top-k sampling (default: 40, 0 = disabled)
@@ -198,32 +216,40 @@ Similar to top_p, but select instead only the top_k most likely tokens. Higher value = higher range of possible random results.

Bring this up to 80-120 for a lot more word choice, and below 40 for simpler word choices.

+As this parameter operates in conjunction with "top-p" and "min-p", all three should be carefully adjusted one at a time.
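Because these three samplers filter the same candidate list, a toy sketch may help show why they should be tuned one at a time (illustrative only; real engines may apply samplers in a configurable order). Here top-k caps the list first, then min-p drops anything weaker than min_p times the top token's probability:

```python
# Toy sketch of top-k + min-p interaction (not engine code).
def top_k_then_min_p(probs, top_k=40, min_p=0.05):
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k] if top_k > 0 else order  # top_k=0 -> disabled
    floor = min_p * probs[kept[0]]                # threshold off top token
    kept = [i for i in kept if probs[i] >= floor]
    scale = sum(probs[i] for i in kept)
    return {i: probs[i] / scale for i in kept}

# With top_k=2 the third token can never survive, no matter how
# permissive min_p is - hence "adjust one at a time":
print(top_k_then_min_p([0.5, 0.3, 0.15, 0.05], top_k=2, min_p=0.05))
```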
+<B>NOTE - "CORE" Testing with "TEMP":</B>

For an interesting test, set "temp" to 0; this will give you the SAME generation for a given prompt each time.

-Then adjust a word, phrase, sentence etc
+Then adjust a word, phrase, sentence etc. in your prompt, and generate again to see the differences.
+
+(you should use a "fresh" chat for each generation)

Keep in mind this will show model operation at its LEAST powerful/creative level and should NOT be used to determine if the model works for your use case(s).

-Then test "at temp" to see the model in action. (5-10 generations recommended)
+Then test your prompt(s) "at temp" to see the model in action. (5-10 generations recommended)

-You can also use "temp=0" to test different quants of the same model to see generation differences. (roughly "BIAS").
+You can also use "temp=0" to test different quants of the same model to see generation differences (roughly minor "BIAS" changes, which reflect math changes due to compression/mixture differences between quants).

-Another option is testing different models (of the same quant) to see how each handles your prompt(s).
+Another option is testing different models (at temp=0 AND of the same quant) to see how each handles your prompt(s).

-Then test "at temp" to see the MODELS in action. (5-10 generations recommended)
+Then test "at temp" with your prompt(s) to see the MODELS in action. (5-10 generations recommended)
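One way to script this temp=0 test is via the llama-cpp-python bindings - a sketch under assumptions: the model path and prompt are placeholders, and the min_p keyword requires a reasonably recent version of the bindings.

```python
# Sketch of the "CORE" temp=0 test (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="./YourModel-Q4_K_M.gguf", n_ctx=4096, verbose=False)
prompt = "Write the opening scene of a storm hitting a lighthouse."

# temp=0 -> greedy decoding: the same prompt yields the SAME output on
# every run, giving a stable baseline to compare prompt edits against.
base = llm(prompt, max_tokens=256, temperature=0.0)
print(base["choices"][0]["text"])

# Then run "at temp" 5-10 times to see the model's real range.
for _ in range(5):
    out = llm(prompt, max_tokens=256, temperature=0.8,
              top_p=0.95, min_p=0.05, top_k=40)
    print(out["choices"][0]["text"])
```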
------------------------------------------------------------------------------
PENALTY SAMPLERS:
------------------------------------------------------------------------------

These samplers "trim" or "prune" output in real time.

+The longer the generation, the stronger the overall effect - but that also depends on the "repeat-last-n" setting.
+
+For creative use cases, these samplers can alter prose generation in interesting ways.

CLASS 4: For these models it is important to activate / set all samplers as noted for maximum quality and control.

-PRIMARY
+<B>PRIMARY:</B>

<B>repeat-last-n</B>
@@ -240,7 +266,7 @@ This setting also works in conjunction with all other "rep pens" below.

This parameter is the "RANGE" of tokens looked at for the samplers directly below.

-SECONDARIES
+<B>SECONDARIES:</B>

<B>repeat-penalty</B>
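To make the "RANGE" idea concrete, here is an illustrative sketch of a common repeat-penalty scheme (the divide-positive / multiply-negative rule used by llama.cpp-style engines); exact math varies by backend, so treat this as a model of the behavior, not a spec.

```python
# Toy sketch of repeat-last-n + repeat-penalty working together.
def apply_repeat_penalty(logits, recent_tokens, repeat_last_n=64,
                         repeat_penalty=1.1):
    # Only token ids seen within the last `repeat_last_n` generated
    # tokens (the "RANGE") are eligible for the penalty at all.
    window = set(recent_tokens[-repeat_last_n:]) if repeat_last_n else set()
    out = list(logits)
    for tok in window:
        if tok < len(out):
            # Shrink positive logits, push negative ones further down,
            # making recently used tokens less likely to repeat.
            out[tok] = (out[tok] / repeat_penalty if out[tok] > 0
                        else out[tok] * repeat_penalty)
    return out

# Tokens 0 and 1 were just used: 2.0 -> ~1.82 and -1.0 -> -1.1,
# while the unseen token 2 is left untouched.
print(apply_repeat_penalty([2.0, -1.0, 0.5], recent_tokens=[0, 1]))
```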
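Finally, to tie the parameters and samplers above together, here is a sketch of one way to send a full set of them to a llama.cpp server's /completion endpoint. The URL, prompt, and values are placeholders to adapt to your own setup; field names follow llama.cpp's server API.

```python
# Sketch: POSTing sampler settings to a local llama.cpp server.
import json
import urllib.request

payload = {
    "prompt": "Continue the story: the lighthouse door burst open...",
    "n_predict": 256,
    "temperature": 0.8,   # creativity vs. stability
    "top_k": 40,          # raise to 80-120 for more word choice
    "top_p": 0.95,
    "min_p": 0.05,
    "repeat_last_n": 64,  # the "RANGE" the penalties look back over
    "repeat_penalty": 1.1,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```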