# Prikol
I don't even know anymore
## Overview
I have yet to try it. UPD: it sucks, bleh.
It sometimes mistakes {{user}} for {{char}} and can't think. Other than that, its behavior is similar to its predecessors'.
It does give some funny replies sometimes tho, yay!
If you still want to give it a try, here's a cursed text-completion preset for cursed models, which makes them somewhat bearable:
https://files.catbox.moe/qr3s64.json
Or this one:
https://files.catbox.moe/97xryh.json
Prompt format: Llama3
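For reference, the Llama 3 instruct format wraps every turn in header tokens and terminates it with `<|eot_id|>`. A minimal single-turn sketch (the `build_prompt` helper is mine, not part of either preset above):

```python
# Sketch of the Llama 3 instruct prompt format.
# build_prompt is a hypothetical helper, not part of either preset above.

def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 prompt string, ending with an open
    assistant header so the model continues from there."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are {{char}}.", "Hi!"))
```

The presets linked above should set this up for you; the sketch is just so you can sanity-check what your frontend sends.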
## Quants
https://huggingface.co/bartowski/Nohobby_L3.3-Prikol-70B-v0.4-GGUF
## Merge Details
### Step 1

```yaml
base_model: sophosympatheia/Nova-Tempus-70B-v0.2
merge_method: model_stock
dtype: bfloat16
models:
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  - model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
tokenizer:
  source: sophosympatheia/Nova-Tempus-70B-v0.2
```
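As I understand it, `model_stock` averages the fine-tuned weights and then interpolates back toward the base, with the ratio set by the angle between the task vectors. A rough per-tensor numpy sketch under that reading (names are mine, not mergekit's):

```python
import numpy as np

# Hedged sketch of the model_stock idea as I read it; this is an
# illustration, not mergekit's actual implementation.

def model_stock(base: np.ndarray, tuned: list[np.ndarray]) -> np.ndarray:
    """Interpolate the mean of the fine-tuned weights back toward the base,
    scaled by the average angle between task vectors (tuned - base)."""
    k = len(tuned)
    deltas = [t - base for t in tuned]
    # Average pairwise cosine similarity between task vectors.
    cos = np.mean([
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        for i, a in enumerate(deltas) for b in deltas[i + 1:]
    ])
    t = k * cos / (1 + (k - 1) * cos)  # interpolation ratio
    avg = np.mean(tuned, axis=0)
    return t * avg + (1 - t) * base
```

Intuition: the more the donors agree (task vectors nearly parallel), the more weight the merged average gets; orthogonal donors collapse back to the base.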
### Step 2

```yaml
models:
  - model: unsloth/DeepSeek-R1-Distill-Llama-70B
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      select_topk:
        - value: [0.18, 0.3, 0.32, 0.38, 0.32, 0.3]
  - model: Nohobby/AbominationSnowPig
    parameters:
      select_topk:
        - value: [0.1, 0.06, 0.05, 0.05, 0.08]
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      select_topk: 0.17
  - model: mergekit-community/L3.3-L3.1-NewTempusBlated-70B
    parameters:
      select_topk: 0.55
base_model: mergekit-community/L3.3-L3.1-NewTempusBlated-70B
merge_method: sce
parameters:
  int8_mask: true
  rescale: true
  normalize: true
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
```
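For the `sce` method, `select_topk` is (as I read it) the fraction of parameters kept per donor: positions where that donor's task vector varies most across the models survive, and the rest are dropped before fusing. A toy numpy sketch of just that selection step, my reading rather than mergekit's code:

```python
import numpy as np

# Toy sketch of the "select" step behind select_topk: keep only the
# fraction of positions whose values vary most across donor task
# vectors. Illustration only, not mergekit's implementation.

def select_topk_mask(deltas: np.ndarray, topk: float) -> np.ndarray:
    """deltas: (n_models, n_params) task vectors. Returns a boolean mask
    keeping the `topk` fraction of positions with highest cross-model
    variance."""
    var = deltas.var(axis=0)
    n_keep = max(1, int(round(topk * var.size)))
    keep = np.argsort(var)[-n_keep:]  # highest-variance positions
    mask = np.zeros(var.size, dtype=bool)
    mask[keep] = True
    return mask
```

So `select_topk: 0.17` would keep roughly the most-contested 17% of a donor's parameters, and the per-value lists above vary that fraction across layer slices.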