mlx-community/bigcode-starcoder2-15b-8bit
The model mlx-community/bigcode-starcoder2-15b-8bit was converted to MLX format from bigcode/starcoder2-15b by Focused, using mlx-lm version 0.21.1.
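The conversion can be reproduced with mlx-lm's `convert` utility. A minimal sketch, assuming the Python API of mlx-lm 0.21.x; the output directory name is illustrative, and `quantize=True` with `q_bits=8` corresponds to the 8-bit quantization of this repository:

```python
from mlx_lm import convert

# Download bigcode/starcoder2-15b, quantize it to 8 bits,
# and write the MLX weights to a local directory.
convert(
    "bigcode/starcoder2-15b",
    mlx_path="starcoder2-15b-8bit-mlx",  # illustrative output path
    quantize=True,
    q_bits=8,
)
```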
Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/bigcode-starcoder2-15b-8bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
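StarCoder2-15b is a base code model rather than an instruction-tuned chat model, so the `chat_template` branch above is typically skipped and the prompt is simply continued as plain text. A minimal code-completion sketch; the prompt and `max_tokens` value are illustrative:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/bigcode-starcoder2-15b-8bit")

# Ask the model to continue a function definition.
prompt = "def fibonacci(n):\n"
completion = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(completion)
```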
Focused is a technology company at the forefront of AI-driven development, empowering organizations to unlock the full potential of artificial intelligence. From integrating innovative models into existing systems to building scalable, modern AI infrastructures, we specialize in delivering tailored, incremental solutions that meet you where you are. Curious how we can help with your next AI project? Get in Touch
Evaluation results (self-reported)
- CruxEval-I, pass@1: 48.1
- DS-1000, pass@1: 33.8
- GSM8K (PAL), accuracy: 65.1
- HumanEval+, pass@1: 37.8
- HumanEval, pass@1: 46.3
- RepoBench-v1.1, edit-similarity: 74.08