Qwen-2.5-7B-Reasoning (Fine-Tuned by HyperX-Sen)

πŸš€ Model Overview

This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct, optimized for advanced reasoning tasks. Fine-tuning on OpenAI's GSM8K dataset strengthens its multi-step reasoning and word-problem-solving capabilities.

πŸ”§ Fine-Tuning Details

  • Base model: Qwen/Qwen2.5-7B-Instruct
  • Dataset: GSM8K (grade-school math word problems)
  • Output format: responses wrapped in <reasoning> and <answer> tags (see the system prompt in the usage example below)

πŸ“ˆ Performance Improvements

Through fine-tuning on GSM8K, the model has improved in:

  • Mathematical reasoning
  • Step-by-step logical deduction
  • Commonsense reasoning
  • Word problem-solving

This makes it ideal for applications requiring high-level reasoning, such as AI tutoring, research assistance, and problem-solving AI agents.
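To make the target behavior concrete, here is a GSM8K-style word problem together with the tagged output format the model is prompted to follow (an illustrative sketch, not a captured model response; the tags come from the system prompt in the usage section below):

Prompt: Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?

<reasoning>
In May she sold 48 / 2 = 24 clips.
Across both months she sold 48 + 24 = 72 clips.
</reasoning>
<answer>
72
</answer>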

πŸ›  How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "HyperX-Sen/Qwen-2.5-7B-Reasoning"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
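# Optional lower-memory alternative (a sketch, assuming bitsandbytes is
# installed; not part of the original card): load the 7B weights in 4-bit
# instead of the full-precision call above.
# from transformers import BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     model_name,
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )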

SYSTEM_PROMPT = """
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
"""

# Define the conversation
messages = [
    {"role": "system", "content": f"{SYSTEM_PROMPT}"},
    {"role": "user", "content": "What are the potential impacts of artificial intelligence on employment?"}
]

# Format the chat input
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize the formatted input
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate the response
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (skip the echoed prompt)
response = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
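Because the system prompt asks for <reasoning> and <answer> tags, it is often useful to pull out just the final answer. Below is a minimal sketch using only the standard library; the helper name extract_answer is illustrative, and the tag names match the SYSTEM_PROMPT above.

import re

def extract_answer(text: str) -> str | None:
    # Return the content between the <answer> tags, or None if the tags are absent
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", text, re.DOTALL)
    return match.group(1) if match else None

print(extract_answer(response))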

πŸ™Œ Acknowledgments

A huge thanks to the Qwen team for the powerful Qwen2.5-7B-Instruct model, which served as the base for this fine-tuned version.
