Menda-3b-Optim-200: Optimized GRPO-Tuned Qwen2.5 Model

Menda-3b-Optim-200 is a fine-tuned version of Qwen2.5-3B-Instruct, trained for 200 steps with an optimized GRPO (Group Relative Policy Optimization) methodology. It shows significantly improved performance on reasoning benchmarks compared to both the base model and earlier GRPO checkpoints.

Model Details

  • Base Model: Qwen/Qwen2.5-3B-Instruct
  • Training Method: Optimized GRPO with enhanced reward functions
  • Training Steps: 200
  • Parameters: 3 billion
  • Context Length: 32K tokens
  • Training Data: GSM8K (mathematical reasoning)
  • Chat Template: Uses the Qwen2 chat template

Optimization Improvements

This model uses several key optimizations over the standard GRPO approach:

  1. Higher Learning Rate: 2e-5 (4x higher than standard)
  2. Improved Scheduler: Cosine with restarts
  3. Enhanced Reward Functions (sketched in code after this list):
    • Continuous correctness rewards with partial credit
    • Multi-component reasoning quality assessment
    • Format validation with both strict and soft checks
  4. Adjusted Batch Processing: Optimized gradient accumulation
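
The reward functions were not released with the model, so the following is a minimal, hypothetical sketch of the three components named above. The <reasoning>/<answer> tag scaffold, the component weights, the partial-credit rule, and the word-count heuristic are all illustrative assumptions, not the actual training code:

import re

def extract_final_answer(completion: str) -> str:
    """Pull the text inside <answer>...</answer> tags, if present."""
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", completion, re.DOTALL)
    return match.group(1).strip() if match else ""

def correctness_reward(completion: str, gold: str) -> float:
    """Continuous correctness: full reward for an exact final answer,
    partial credit if the gold answer appears anywhere in the output."""
    if extract_final_answer(completion) == gold:
        return 1.0
    if gold and gold in completion:
        return 0.5  # partial credit (illustrative weight)
    return 0.0

def reasoning_quality_reward(completion: str) -> float:
    """Crude stand-in for the multi-component reasoning quality score:
    reward non-trivial reasoning length, capped at a small bonus."""
    match = re.search(r"<reasoning>\s*(.*?)\s*</reasoning>", completion, re.DOTALL)
    if not match:
        return 0.0
    words = len(match.group(1).split())
    return min(words / 100.0, 0.5)

def format_reward(completion: str) -> float:
    """Format validation with a strict check (exact scaffold) and a soft
    check (both tags merely present somewhere in the output)."""
    if re.fullmatch(r"\s*<reasoning>.*?</reasoning>\s*<answer>.*?</answer>\s*",
                    completion, re.DOTALL):
        return 0.5
    if "<reasoning>" in completion and "<answer>" in completion:
        return 0.25  # soft check
    return 0.0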

Benchmark Results

Menda-3b-Optim-200 has been evaluated on several standard benchmarks:

Benchmark        Task Type                Accuracy
ARC-Challenge    Scientific Reasoning     50.0%
BoolQ            Reading Comprehension    80.0%
HellaSwag        Commonsense Reasoning    40.0%
Lambada          Text Completion          70.0%
PIQA             Physical Reasoning       90.0%
Winogrande       Commonsense Reasoning    90.0%

MMLU Performance

MMLU Category      Score
Overall            69.47%
Humanities         76.15%
Social Sciences    76.67%
STEM               60.53%
Other              69.23%

Key Strengths

  • Highest MMLU Score: This checkpoint achieves the highest overall MMLU score (69.47%) among all checkpoints in the training progression.
  • Strong Reasoning Capabilities: Excellent performance on reasoning tasks (90% on both PIQA and Winogrande).
  • Balanced Performance: Maintains strong performance across diverse tasks without significant trade-offs.
  • Efficient Training: Achieves superior results with fewer training steps than previous checkpoints.
  • Subject-Specific Excellence: Perfect 100% on High School Macroeconomics and 90%+ on multiple subjects.

Chat Format

This model uses the standard Qwen2 chat template. For best results when using the model directly, format your prompts as follows:

<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Your question here<|im_end|>
<|im_start|>assistant

When using the model through the Hugging Face Transformers library, the chat template is applied automatically via tokenizer.apply_chat_template, as shown in the chat example below.

Usage Examples

Basic Usage with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3b-Optim-200"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # match the model's device
outputs = model.generate(**inputs, max_new_tokens=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Chat Usage with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3b-Optim-200"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
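# Strip the prompt tokens so only the newly generated reply is decoded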
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Training Configuration

The model was trained using the optimized GRPO methodology with the following configuration (a hypothetical reconstruction in code follows the list):

  • LoRA Rank: 128
  • Learning Rate: 2e-5
  • Optimizer: AdamW (8-bit)
  • Batch Size: 1 per device
  • Gradient Accumulation Steps: 8
  • Scheduler: Cosine with restarts
  • Training Samples: 100 examples from GSM8K
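
The training script itself was not published. As a rough, hypothetical reconstruction, a comparable run could be wired up with TRL's GRPOTrainer and PEFT as below; the dataset split mapping, LoRA alpha, and output path are assumptions, and the reward functions refer to the sketches in the Optimization Improvements section (a real run would adapt them to TRL's reward-function signature):

from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# 100 GSM8K examples, matching the stated training set size; a real run
# would also map GSM8K's question/answer fields to the "prompt" column
# that GRPOTrainer expects.
train_dataset = load_dataset("openai/gsm8k", "main", split="train[:100]")

peft_config = LoraConfig(r=128, lora_alpha=128, task_type="CAUSAL_LM")  # LoRA rank 128

training_args = GRPOConfig(
    output_dir="menda-3b-optim-200",          # assumed output path
    learning_rate=2e-5,                       # 4x the standard GRPO rate
    lr_scheduler_type="cosine_with_restarts",
    optim="adamw_bnb_8bit",                   # 8-bit AdamW
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    max_steps=200,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=[correctness_reward, format_reward],  # see the reward sketch above
    args=training_args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()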

License

This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the Qwen2.5 license for details.
