Model Details

A reasoning model in the Llama series, fine-tuned on microsoft/orca-math-word-problems-200k with GRPO (Group Relative Policy Optimization), a reinforcement learning technique.

Base model: meta-llama/Llama-3.1-8B-Instruct

Parameters

The following training hyperparameters were used (a training sketch that plugs them in follows the list):

  • learning_rate = 5e-6
  • adam_beta1 = 0.9
  • adam_beta2 = 0.99
  • weight_decay = 0.1
  • warmup_ratio = 0.1
  • lr_scheduler_type = "cosine"
  • optim = "paged_adamw_8bit"
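
A minimal sketch of how these hyperparameters could be wired into a GRPO run. It assumes TRL's GRPOTrainer and an illustrative format-checking reward function; the actual training script and reward used for this model are not published with the card.

```python
# Minimal GRPO training sketch. Assumes TRL's GRPOTrainer; the actual
# training script for ThinkerLlama-8B-v1 is not published.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; orca-math ships "question"/"answer".
dataset = load_dataset("microsoft/orca-math-word-problems-200k", split="train")
dataset = dataset.rename_column("question", "prompt")

# Illustrative reward: score completions on whether they follow the
# <reasoning>/<answer> format suggested below. The real reward function
# used for this model is not documented.
def format_reward(completions, **kwargs):
    return [
        1.0 if "<reasoning>" in c and "<answer>" in c else 0.0
        for c in completions
    ]

training_args = GRPOConfig(
    output_dir="ThinkerLlama-8B-v1",
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
)

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    reward_funcs=format_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

The optim value "paged_adamw_8bit" selects the 8-bit paged AdamW optimizer from bitsandbytes, which keeps optimizer-state memory low during fine-tuning.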

Suggested system prompt for reasoning

Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
Do not forget <reasoning></reasoning><answer></answer> tags.
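
A short inference sketch that applies the prompt above, assuming the standard transformers chat-template API; the model id is taken from this card and the example question is made up.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/ThinkerLlama-8B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The suggested system prompt from this card, verbatim.
SYSTEM_PROMPT = (
    "Respond in the following format:\n"
    "<reasoning>\n...\n</reasoning>\n"
    "<answer>\n...\n</answer>\n"
    "Do not forget <reasoning></reasoning><answer></answer> tags."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    # Made-up math word problem for illustration.
    {"role": "user", "content": "A baker sold 24 muffins in the morning and "
     "twice as many in the afternoon. How many muffins were sold in total?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The final answer can then be pulled out of the reply by matching the text between the <answer> tags, e.g. with a regular expression.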

Support

If you find this work useful, you can support me on Buy Me a Coffee.

Model size: 8.03B params (Safetensors, BF16)

