Model Overview

This repository, ModelFuture-Distill-Qwen-32B-SFT-v1, is intended for testing purposes. The model was produced by applying Supervised Fine-Tuning (SFT) directly to the base model, Qwen/Qwen2.5-32B.
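
The exact SFT recipe (data, hyperparameters, schedule) is not documented in this card. As a rough, non-authoritative illustration of what applying SFT directly to the base model can look like, the sketch below uses the TRL library; the dataset, output directory, and all hyperparameters are placeholders, not the actual setup used for this checkpoint.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the actual SFT data for this checkpoint is not published.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B",  # the base model; recent TRL versions accept a model id string
    train_dataset=dataset,
    args=SFTConfig(output_dir="ModelFuture-Distill-Qwen-32B-SFT-v1"),
)
trainer.train()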

Intended Use

This model is primarily intended for testing and validation purposes. It can be used to:

  • Evaluate the performance of the distilled model on various tasks.
  • Test the functionality and robustness of the model in different environments.
  • Provide a baseline for further development and optimization.
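
For basic functional testing, a minimal smoke-test sketch along the following lines can confirm that the model loads and produces non-empty chat responses. It assumes model and tokenizer have been loaded as in the quickstart snippet below; the prompts and the check are illustrative, not an official test suite.

# Illustrative smoke test; assumes `model` and `tokenizer` are loaded as shown below.
smoke_prompts = ["What is 2 + 2?", "Name three primary colors."]
for prompt in smoke_prompts:
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    # Keep only the tokens generated after the prompt.
    reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    assert reply.strip(), f"Empty response for prompt: {prompt!r}"
    print(f"{prompt} -> {reply[:80]}")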

The following code snippet shows how to load the tokenizer and the model, and how to use apply_chat_template to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "zhuguoku/ModelFuture-Distill-Qwen-32B-SFT-v1"

# Load the checkpoint in its stored precision (BF16) and shard it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example prompt (translated to English); any chat-style request works.
prompt = "I want to get more exercise. Could you give me some advice?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 512 new tokens, then keep only the newly generated portion of each sequence.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
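
By default, generate follows the decoding settings shipped in the model's generation_config. To control decoding explicitly, transformers' generate accepts standard sampling parameters; the values below are illustrative starting points, not recommended settings for this model.

# Reuses the model, tokenizer, and model_inputs from the snippet above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,      # stochastic sampling instead of greedy decoding
    temperature=0.7,     # illustrative value; lower is more deterministic
    top_p=0.9,           # nucleus sampling cutoff (illustrative)
)
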
Model details

  • Model size: 32.8B parameters (Safetensors)
  • Tensor type: BF16

Model tree for zhuguoku/ModelFuture-Distill-Qwen-32B-SFT-v1

Base model: Qwen/Qwen2.5-32B, fine-tuned to produce this model.