
# Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k

This model was converted to MLX format from `mlx-community/Mistral-7B-Instruct-v0.2-8-bit-mlx`. Refer to the original model card for more details on the model.

It was fine-tuned with DPO using the distilabel-capybara-dpo-7k dataset by Argilla.

## Use with mlx

```shell
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k --prompt "What weighs more, 1kg of feathers or 0.5kg of steel?"
```
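As an alternative to cloning the mlx-examples repository, a minimal sketch using the `mlx-lm` package (an assumption: it must be installed separately with `pip install mlx-lm`, and the repo name is taken from this card):

```python
# Hypothetical sketch: load the quantized model from the Hub and generate a reply.
# Assumes `pip install mlx-lm`; downloads the model weights on first run.
from mlx_lm import load, generate

model, tokenizer = load(
    "mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k"
)
response = generate(
    model,
    tokenizer,
    prompt="What weighs more, 1kg of feathers or 0.5kg of steel?",
    max_tokens=256,
)
print(response)
```

This runs the same generation as `generate.py` above without needing the examples checkout.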
Safetensors — model size: 1.24B params; tensor types: BF16, U32, F32.