
Genetic Lemonade Unleashed


The Nevoria series from SteelSkull inspired me to learn how to merge.

This model is the result of a few dozen attempts at learning how to merge.

Designed for RP, this model is mostly uncensored and focused on striking a balance between writing style, creativity, and intelligence.

SillyTavern Settings

Llam@ception is recommended for sane defaults if you're unsure; import the presets into SillyTavern and they're plug and play.

Sampler Settings

  • Temp: 1
  • MinP: 0.02-0.05
  • DRY: 0.8, 1.75, 4

Set temperature last and neutralize the other samplers; this model natively strikes a balance of creativity and intelligence.
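If you drive the model through a local API instead of SillyTavern's UI, the same values translate into a sampler payload. The sketch below assumes a KoboldCpp-style /api/v1/generate endpoint; the field names (min_p, dry_multiplier, dry_base, dry_allowed_length), the port, and the response shape are assumptions, so check your backend's documentation before relying on them.

# Hedged sketch: the recommended sampler values as a request payload.
# Field names follow KoboldCpp's generate API as an assumption; other
# backends expose the same samplers under different names.
import requests

payload = {
    "prompt": "Write the opening scene of a heist gone wrong.",
    "max_length": 300,
    "temperature": 1.0,          # Temp: 1
    "min_p": 0.03,               # MinP: 0.02-0.05
    "dry_multiplier": 0.8,       # DRY: 0.8, 1.75, 4
    "dry_base": 1.75,
    "dry_allowed_length": 4,
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])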

Instruct

Llama-3-Instruct-Names but you will need to uncheck "System same as user".

Quants

GGUF

EXL2

Merge Details

Merge Method

This model was merged using the SCE merge method.
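The configs below are in mergekit format. As a rough intuition for what select_topk controls in SCE: for each tensor, only the fraction of task-vector elements with the highest variance across the source models is kept before fusing. The snippet below is a simplified, illustrative sketch of that selection step under that reading of the method, not mergekit's actual implementation.

import torch

def sce_select(deltas, topk=0.15):
    # deltas: one tensor per source model, each holding that model's
    # difference from the base model for a single parameter tensor.
    stacked = torch.stack(deltas)              # (num_models, *shape)
    variance = stacked.var(dim=0)              # element-wise variance across models
    k = max(1, int(topk * variance.numel()))
    threshold = variance.flatten().topk(k).values.min()
    mask = (variance >= threshold).to(stacked.dtype)
    return [d * mask for d in deltas]          # keep only high-variance elements

# Toy usage with random "task vectors" from four source models.
deltas = [torch.randn(8, 8) for _ in range(4)]
selected = sce_select(deltas, topk=0.15)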

merge_v6_base_E

models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  - model: nbeerbower/llama3.1-kartoffeldes-70B
  - model: tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
select_topk: 0.15
merge_method: sce
base_model: meta-llama/Llama-3.3-70B-Instruct
out_dtype: bfloat16
dtype: float32
tokenizer:
  source: base

Genetic Lemonade Unleashed

models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  - model: crestf411/L3.1-nemotron-sunfall-v0.7.0
  - model: Sao10K/L3.1-70B-Hanami-x1
merge_method: sce
base_model: ./merge_v6_base_E
select_topk: 0.15
out_dtype: bfloat16
dtype: float32
tokenizer:
  source: union
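
The two configs run in sequence: the first stage produces ./merge_v6_base_E, which the second config then uses as its base_model. Below is a hedged sketch of driving both stages with mergekit's mergekit-yaml CLI (installed via pip install mergekit); the YAML filenames and the final output directory are placeholders, and available flags can differ between mergekit versions.

# Hedged sketch: run both merge stages in order. The config filenames and
# the final output directory are assumptions; ./merge_v6_base_E must match
# the base_model path referenced by the second config.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge_v6_base_E.yaml", "./merge_v6_base_E", "--cuda"],
    check=True,
)
subprocess.run(
    ["mergekit-yaml", "genetic_lemonade_unleashed.yaml", "./GeneticLemonade-Unleashed-70B", "--cuda"],
    check=True,
)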
