4-bit OmniQuant quantized version of mistralai/Mistral-Small-24B-Instruct-2501 for inference with the Private LLM app. The suffix w4a16g128asym in the repo name denotes 4-bit weights, 16-bit activations, a quantization group size of 128, and asymmetric quantization.
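For context on what a w4a16g128asym recipe means in practice, here is a minimal NumPy sketch of plain min-max asymmetric group-wise quantization. It is an illustration only: OmniQuant itself learns clipping and equivalent-transformation parameters per layer rather than using raw min/max statistics, and every name below is hypothetical.

```python
import numpy as np

def quantize_w4_g128_asym(weights: np.ndarray, group_size: int = 128):
    """Asymmetric 4-bit group-wise quantization of a flat weight array.

    Each group of `group_size` weights gets its own scale and zero-point,
    so an outlier in one group does not degrade precision elsewhere.
    """
    assert weights.size % group_size == 0
    groups = weights.reshape(-1, group_size)

    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)

    # 4-bit unsigned range is 0..15; asymmetric means a per-group
    # scale plus a per-group zero-point instead of a symmetric range.
    scale = (w_max - w_min) / 15.0
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    zero_point = np.round(-w_min / scale)

    q = np.clip(np.round(groups / scale) + zero_point, 0, 15).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Reconstruct approximate weights: w ≈ scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
q, s, z = quantize_w4_g128_asym(w)
w_hat = dequantize(q, s, z).reshape(-1)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```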
Inference Providers
This model is not currently available via any of the supported Inference Providers. It cannot be deployed to the HF Inference API, which does not support text-generation models for the mlc-llm library.
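Because the weights target the mlc-llm runtime, inference outside the Private LLM app would go through MLC's own engine rather than the HF Inference API. Below is a minimal sketch using mlc-llm's MLCEngine Python API; it assumes this repo's weights are consumable as an MLC model path via the HF:// scheme, which is untested for this specific repo.

```python
from mlc_llm import MLCEngine

# Hypothetical: assumes this repo is packaged in MLC format.
model = "HF://numen-tech/Mistral-Small-24B-Instruct-2501-w4a16g128asym"
engine = MLCEngine(model)

# Stream a chat completion using the OpenAI-style interface.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize OmniQuant in one sentence."}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        if choice.delta.content:
            print(choice.delta.content, end="", flush=True)

engine.terminate()
```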
Model tree for numen-tech/Mistral-Small-24B-Instruct-2501-w4a16g128asym
Base model: mistralai/Mistral-Small-24B-Base-2501