Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# Qwen2.5-Coder-0.5B-Instruct-MLX - GGUF
- Model creator: https://huggingface.co/TheBlueObserver/
- Original model: https://huggingface.co/TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q2_K.gguf) | Q2_K | 0.32GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K.gguf) | Q3_K | 0.33GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_0.gguf) | Q4_0 | 0.33GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K.gguf) | Q4_K | 0.37GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_1.gguf) | Q4_1 | 0.35GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_0.gguf) | Q5_0 | 0.37GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_K.gguf) | Q5_K | 0.39GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q5_1.gguf) | Q5_1 | 0.39GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q6_K.gguf) | Q6_K | 0.47GB |
| [Qwen2.5-Coder-0.5B-Instruct-MLX.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf/blob/main/Qwen2.5-Coder-0.5B-Instruct-MLX.Q8_0.gguf) | Q8_0 | 0.49GB |
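
To try one of these quants locally, the sketch below downloads a file from the table and runs it with llama-cpp-python. It is only a sketch: it assumes `huggingface_hub` and `llama-cpp-python` are installed and uses the Q4_K_M file as an example; any row from the table works the same way.

```python
# Sketch: download one quant from the table above and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and filename exactly as they appear in the table (Q4_K_M shown here)
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/TheBlueObserver_-_Qwen2.5-Coder-0.5B-Instruct-MLX-gguf",
    filename="Qwen2.5-Coder-0.5B-Instruct-MLX.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window
llm = Llama(model_path=gguf_path, n_ctx=4096)

# create_chat_completion applies the chat template stored in the GGUF metadata
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```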




Original model description:
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
---

# TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX

The model [TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX](https://huggingface.co/TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX) was
converted to MLX format from [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct)
using mlx-lm version **0.20.2**.
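
The conversion step can be reproduced with the `mlx_lm.convert` command that ships with mlx-lm; the invocation below is a rough sketch, and exact flag names may differ between mlx-lm versions.

```bash
# Rough sketch of the MLX conversion (requires mlx-lm; flags may vary by version)
mlx_lm.convert \
    --hf-path Qwen/Qwen2.5-Coder-0.5B-Instruct \
    --mlx-path Qwen2.5-Coder-0.5B-Instruct-MLX
```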

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the MLX-format model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
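
The model can also be exercised from the shell via the `mlx_lm.generate` entry point installed alongside mlx-lm; the flags below are the commonly documented ones and may vary slightly between versions.

```bash
# Command-line generation with mlx-lm (flag names may differ by version)
mlx_lm.generate --model TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX \
    --prompt "Write a Python function that checks whether a number is prime." \
    --max-tokens 256
```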