
A DPO fine-tune of mhm-7b-v1.3, trained on Intel/orca_dpo_pairs.
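
Below is a minimal sketch of how a DPO fine-tune on Intel/orca_dpo_pairs might look using TRL's `DPOTrainer`. It is not the exact training script used for this model: the base repo id, hyperparameters, and column mapping are assumptions, and argument names vary slightly across TRL releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "h2m/mhm-7b-v1.3"  # assumed repo id of the pre-DPO checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Intel/orca_dpo_pairs provides system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected.
def to_dpo_format(row):
    prompt = (row["system"] + "\n\n" if row["system"] else "") + row["question"]
    return {"prompt": prompt, "chosen": row["chosen"], "rejected": row["rejected"]}

dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(to_dpo_format, remove_columns=dataset.column_names)

config = DPOConfig(
    output_dir="mhm-7b-v1.3-dpo",
    beta=0.1,                       # illustrative KL-penalty strength
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # older TRL releases call this `tokenizer`
)
trainer.train()
```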

Based on Mistral. Created with dare_ties, merging models from the Open LLM Leaderboard; this is the result of three merges involving seven different models.
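
For context, a dare_ties merge is typically driven by a mergekit config. The sketch below is illustrative only: the actual donor models and weights behind this merge are not listed on the card, so the entries are placeholders.

```python
import subprocess
import textwrap

# Hypothetical dare_ties config; donor model names and weights are placeholders.
config = textwrap.dedent("""\
    merge_method: dare_ties
    base_model: mistralai/Mistral-7B-v0.1   # assumed Mistral base
    models:
      - model: some-org/leaderboard-model-a  # placeholder donor model
        parameters:
          density: 0.5
          weight: 0.5
      - model: some-org/leaderboard-model-b  # placeholder donor model
        parameters:
          density: 0.5
          weight: 0.5
    dtype: float16
    """)

with open("dare_ties.yml", "w") as f:
    f.write(config)

# mergekit ships a `mergekit-yaml` CLI that consumes this config and writes
# the merged checkpoint to the given output directory.
subprocess.run(["mergekit-yaml", "dare_ties.yml", "merged-model"], check=True)
```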

Just an experiment.
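
A minimal example of loading the model with `transformers` for generation (the prompt format is not documented on this card, so a plain prompt is used):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "h2m/mhm-7b-v1.3-DPO-1"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain what DPO fine-tuning is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```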

Downloads last month: 905
Format: Safetensors
Model size: 7.24B params
Tensor type: F16

Model tree for h2m/mhm-7b-v1.3-DPO-1

Quantizations: 2 models

Spaces using h2m/mhm-7b-v1.3-DPO-1: 7