
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Qwen2.5-Coder-0.5B-Instruct-MLX - GGUF

Original model description:

license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx

TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX

The model TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX was converted to MLX format from Qwen/Qwen2.5-Coder-0.5B-Instruct using mlx-lm version 0.20.2.
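For reference, mlx-lm exposes a convert utility for this kind of conversion. The snippet below is only a sketch based on recent mlx-lm documentation; argument names may differ slightly in version 0.20.2.

from mlx_lm import convert

# Convert the original Hugging Face checkpoint to MLX weights.
# mlx_path names the output directory; quantize=True would additionally
# quantize the weights (not used for this full-precision conversion).
convert(
    "Qwen/Qwen2.5-Coder-0.5B-Instruct",
    mlx_path="Qwen2.5-Coder-0.5B-Instruct-MLX",
)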

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Download (or load from the local cache) the MLX model and its tokenizer
model, tokenizer = load("TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams the output to stdout
response = generate(model, tokenizer, prompt=prompt, verbose=True)
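generate returns the completion as a plain string; it also accepts a max_tokens argument to cap how many new tokens are produced (the default limit varies between mlx-lm versions).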
GGUF
Model size: 494M params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
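The GGUF files in this repository can be loaded with llama.cpp or any of its bindings. Below is a minimal sketch using the llama-cpp-python binding; the file name (including the Q4_K_M suffix) is only an assumed example, so substitute whichever quantization you actually download.

from llama_cpp import Llama

# Load one of the GGUF quantizations from this repo. The file name below is a
# placeholder; use the actual file you downloaded (2-bit through 8-bit variants).
llm = Llama(
    model_path="Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,
)

# Chat-style completion using the model's built-in chat template
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])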
