# flohannes/mistral-7b-instruct-4bit-1k
The model `flohannes/mistral-7b-instruct-4bit-1k` was converted to MLX format from `mlx-community/Mistral-7B-Instruct-v0.2-4bit` using mlx-lm version 0.17.1.
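For reference, a conversion like this is typically done with the mlx-lm conversion utility. A minimal sketch, assuming the CLI flags of the mlx-lm version named above (the output path here is illustrative, not the one actually used for this repo):

```bash
# Convert the source model into a local MLX-format directory
python -m mlx_lm.convert \
    --hf-path mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    --mlx-path mistral-7b-instruct-4bit-1k
```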
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("flohannes/mistral-7b-instruct-4bit-1k")

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
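Mistral-Instruct models are tuned on the `[INST] ... [/INST]` chat format, so for instruction-style prompts it is usually better to build the prompt through the tokenizer's chat template. A minimal sketch, assuming the tokenizer ships a chat template as the upstream Mistral-7B-Instruct-v0.2 tokenizer does:

```python
from mlx_lm import load, generate

model, tokenizer = load("flohannes/mistral-7b-instruct-4bit-1k")

# Wrap the user message in the model's chat template so the prompt
# carries the [INST] ... [/INST] markers the model expects
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```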