gvozdev/mistral-7b-dm-v0.1

The model gvozdev/mistral-7b-dm-v0.1 was converted to MLX format from mistralai/Mistral-7B-v0.3 using mlx-lm version 0.14.3.

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Download the MLX weights and matching tokenizer from the Hub
model, tokenizer = load("gvozdev/mistral-7b-dm-v0.1")

# Generate a completion; verbose=True prints the output as it is produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
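A slightly longer sketch of the same API, assuming the standard mlx-lm generate() signature; the prompt text and the max_tokens value are illustrative and not part of the original card:

from mlx_lm import load, generate

# Same load call as above
model, tokenizer = load("gvozdev/mistral-7b-dm-v0.1")

# max_tokens caps the length of the completion; 128 is an arbitrary example
response = generate(
    model,
    tokenizer,
    prompt="Write a one-sentence summary of the Mistral architecture.",
    max_tokens=128,
    verbose=True,
)
print(response)

mlx-lm also ships a command-line entry point (python -m mlx_lm.generate --model gvozdev/mistral-7b-dm-v0.1 --prompt "hello") if you prefer not to write any Python; flags shown here follow the usual mlx-lm CLI and should be checked against your installed version.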
Model size: 7.25B params
Architecture: llama
Quantization: 4-bit