internlm2_5-7b-chat-abliterated

Version 1.1 (Updated 9/1/2024): Layer 17 is now used for abliteration instead of layer 16. Refusal mitigation tends to work better with this layer, and both PCA and cosine-similarity tests seem to agree.
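To illustrate how a layer might be compared for this purpose, here is a minimal NumPy sketch of one plausible cosine-similarity test: estimate a candidate refusal direction per layer from the mean difference between "harmful" and "harmless" prompt activations, then score how cleanly individual activations align with it. The helper names (`candidate_direction`, `layer_score`) and the synthetic activations are hypothetical, not taken from the actual notebook.

```python
import numpy as np

def candidate_direction(h_harm, h_safe):
    # Mean-difference estimate of the refusal direction at one layer
    d = h_harm.mean(axis=0) - h_safe.mean(axis=0)
    return d / np.linalg.norm(d)

def layer_score(h_harm, h_safe):
    # Mean cosine similarity of individual harmful activations with the
    # candidate direction: higher means a cleaner, more consistent signal
    d = candidate_direction(h_harm, h_safe)
    centered = h_harm - h_safe.mean(axis=0)
    cos = centered @ d / np.linalg.norm(centered, axis=1)
    return cos.mean()

rng = np.random.default_rng(1)
n, dim = 32, 64
base = rng.normal(size=(n, dim))                # shared "harmless" activations
d_sig = rng.normal(size=dim)
d_sig /= np.linalg.norm(d_sig)                  # true (synthetic) refusal direction

# Two hypothetical layers: the second carries a stronger, less noisy signal
harm_weak = base + 0.5 * d_sig + rng.normal(scale=1.0, size=(n, dim))
harm_strong = base + 3.0 * d_sig + rng.normal(scale=0.2, size=(n, dim))

score_weak = layer_score(harm_weak, base)
score_strong = layer_score(harm_strong, base)
print(score_strong > score_weak)  # the cleaner layer scores higher
```

In practice the activations would come from hooking the residual stream of each candidate layer; the layer whose direction separates the two prompt sets most consistently is the one chosen for abliteration.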

Check out the Jupyter notebook for details on how this model was abliterated from internlm2_5-7b-chat.
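As a rough sketch of the core abliteration step (not the notebook's exact code): once a unit refusal direction has been estimated at the chosen layer, it can be projected out of the model's weight matrices so the model can no longer write along that direction. The function names and synthetic data below are illustrative assumptions.

```python
import numpy as np

def refusal_direction(h_harm, h_safe):
    # Mean-difference estimate of the refusal direction, normalized to unit length
    d = h_harm.mean(axis=0) - h_safe.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W, d):
    # Remove the component of W's output space that lies along d:
    # W_abl = W - d d^T W, so d @ W_abl == 0 for unit d
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(0)
dim = 64
d_true = rng.normal(size=dim)
d_true /= np.linalg.norm(d_true)

h_safe = rng.normal(size=(32, dim))
h_harm = h_safe + 3.0 * d_true          # synthetic refusal offset

d = refusal_direction(h_harm, h_safe)
W = rng.normal(size=(dim, dim))
W_abl = ablate(W, d)

# After ablation the weights contribute (near) zero along the refusal direction
print(float(np.abs(d @ W_abl).max()))
```

In the real procedure the same projection would typically be applied to every layer's output projections so the refusal feature cannot be re-introduced downstream.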

Please check out my newer abliteration of glm-4-9b-chat. Its Jupyter notebook is a little more developed than this one.

Model size: 7.74B params (Safetensors)
Tensor type: BF16
Model tree for byroneverson/internlm2_5-7b-chat-abliterated:
- Finetuned: 12 models
- Quantizations: 2 models