GreenBitAI MLX LLM Collection: GreenBitAI's Low-bit LLMs in MLX format
This quantized low-bit model GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx was converted to MLX format from GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0 using gbx-lm version 0.3.5.
Refer to the original model card for more details on the model.
pip install gbx-lm
from gbx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx")

prompt = "How can I make an apple cake"

# If the tokenizer ships a chat template, wrap the prompt in a chat message.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
    # apply_chat_template tokenizes by default, so decode back to a prompt string.
    prompt = tokenizer.decode(prompt)

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=4096)
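Since this is a DeepSeek-R1 distillation, the generated text usually begins with a <think>...</think> reasoning block before the final answer, which is why max_tokens is set fairly high. The lines below are an illustrative post-processing step, not part of gbx-lm, assuming response holds the generated text as a string and the model follows the standard R1 output format:

# `response` continues from the example above.
# Keep only the text after the closing </think> tag (the final answer);
# if the tag is absent, the split leaves the response unchanged.
final_answer = response.split("</think>")[-1].strip()
print(final_answer)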