---
license: mit
library_name: mlx
datasets:
  - PrimeIntellect/verifiable-coding-problems
  - likaixin/TACO-verified
  - livecodebench/code_generation_lite
language:
  - en
base_model: agentica-org/DeepCoder-1.5B-Preview
pipeline_tag: text-generation
tags:
  - mlx
---

# abalogh/DeepCoder-1.5B-Preview-8bit

This model [abalogh/DeepCoder-1.5B-Preview-8bit](https://huggingface.co/abalogh/DeepCoder-1.5B-Preview-8bit) was converted to MLX format from [agentica-org/DeepCoder-1.5B-Preview](https://huggingface.co/agentica-org/DeepCoder-1.5B-Preview) using mlx-lm version **0.23.1**.
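
A conversion like this one can be reproduced with mlx-lm's `convert` utility. The following is a minimal sketch, not the exact command used for this repo: the `mlx_path` output directory is arbitrary, and `q_bits`/`quantize` are assumptions about the 8-bit quantization settings (check the options in your installed mlx-lm version):

```python
from mlx_lm import convert

# Download the base model from the Hub, quantize the weights to 8 bits,
# and write the MLX-format result to a local directory.
convert(
    "agentica-org/DeepCoder-1.5B-Preview",
    mlx_path="DeepCoder-1.5B-Preview-8bit",  # hypothetical output path
    quantize=True,
    q_bits=8,
)
```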

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("abalogh/DeepCoder-1.5B-Preview-8bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in a chat
# message so the model sees the format it was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
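
For longer completions you may prefer to stream tokens as they are produced rather than wait for `generate` to finish. A minimal sketch using mlx-lm's `stream_generate`; the prompt text and `max_tokens` value here are illustrative, and the `.text` attribute on each yielded response assumes a recent mlx-lm version:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("abalogh/DeepCoder-1.5B-Preview-8bit")

messages = [{"role": "user", "content": "Write a binary search in Python."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# stream_generate yields partial responses; print each newly generated
# text segment as it arrives instead of waiting for the full completion.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```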