A simple unalignment fine-tune on ~900k tokens, aimed at making the model more compliant and willing to handle user requests.

This is the same unalignment training seen in concedo/Beepo-22B, so big thanks to concedo for the dataset.

The chat template is the same as the original model's: ChatML.
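For reference, prompts in the ChatML format wrap each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of building such a prompt by hand (the `to_chatml` helper is hypothetical for illustration; in practice the template ships with the tokenizer and can be applied via `tokenizer.apply_chat_template`):

```python
def to_chatml(messages):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt.

    Hypothetical helper for illustration; real usage should rely on the
    tokenizer's built-in chat template.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```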

Model size: 14.8B params
Tensor type: BF16
Weights format: Safetensors

Model tree for ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign
Base model: Qwen/Qwen2.5-14B