---
license: other
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- mlx
datasets:
- argilla/dpo-mix-7k
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
pipeline_tag: text-generation
model-index:
- name: zephyr-7b-gemma
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MT-Bench
      type: unknown
    metrics:
    - type: unknown
      value: 7.81
      name: score
    source:
      url: https://huggingface.co/spaces/lmsys/mt-bench
---
# mlx-community/zephyr-7b-gemma-v0.1-4bit

This model was converted to MLX format from [`HuggingFaceH4/zephyr-7b-gemma-v0.1`](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1).
Refer to the [original model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/zephyr-7b-gemma-v0.1-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
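Since the model was trained on chat-formatted data, results are usually better when the prompt is built with the tokenizer's chat template rather than passed as raw text. A minimal sketch, assuming the tokenizer returned by `load` exposes `apply_chat_template` as in recent `mlx-lm` releases (the example message is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/zephyr-7b-gemma-v0.1-4bit")

# Format the conversation with the model's chat template so the turn
# markers match what the model saw during fine-tuning.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```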